2023-07-19
13:29 <fabfur> aborted previous operations, no need to disable puppet to apply that CR (https://gerrit.wikimedia.org/r/c/operations/puppet/+/939661) (T342211) [production]
13:27 <fabfur> temporary disable puppet on cp3052 to apply https://gerrit.wikimedia.org/r/c/operations/puppet/+/939661 (T342211) [production]
13:26 <Lucas_WMDE> UTC afternoon backport+config window done [production]
13:15 <btullis@cumin1001> START - Cookbook sre.hosts.reimage for host analytics1073.eqiad.wmnet with OS bullseye [production]
13:13 <lucaswerkmeister-wmde@deploy1002> Finished scap: Backport for [[gerrit:939374|Fix incorrect use of UseLegacyMediaStyles (missing "wg" prefix) (T318433)]] (duration: 10m 47s) [production]
13:04 <lucaswerkmeister-wmde@deploy1002> ssastry and lucaswerkmeister-wmde: Backport for [[gerrit:939374|Fix incorrect use of UseLegacyMediaStyles (missing "wg" prefix) (T318433)]] synced to the testservers mwdebug1001.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2002.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
13:02 <lucaswerkmeister-wmde@deploy1002> Started scap: Backport for [[gerrit:939374|Fix incorrect use of UseLegacyMediaStyles (missing "wg" prefix) (T318433)]] [production]
12:43 <joal@deploy1002> Finished deploy [airflow-dags/analytics@87be328]: Refactor cassandra loading jobs (duration: 00m 14s) [production]
12:43 <joal@deploy1002> Started deploy [airflow-dags/analytics@87be328]: Refactor cassandra loading jobs [production]
12:38 <joal> deploy Airflow analytics dags - Full revamp of cassandra loading jobs [analytics]
12:28 <wm-bot> <lucaswerkmeister> deployed 4fa53fae89 (l10n updates: pt-br) [tools.lexeme-forms]
12:27 <jayme@deploy1002> helmfile [staging-codfw] DONE helmfile.d/services/ipoid: apply [production]
12:27 <jayme@deploy1002> helmfile [staging-codfw] START helmfile.d/services/ipoid: apply [production]
12:22 <jbond> switch puppetboard.wikimedia.org to use puppet7 infrastructure [production]
12:22 <jayme@deploy1002> helmfile [staging] DONE helmfile.d/services/ipoid: apply [production]
12:22 <jayme@deploy1002> helmfile [staging] START helmfile.d/services/ipoid: apply [production]
12:17 <jbond@cumin1001> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) puppetboard.discovery.wmnet on all recursors [production]
12:17 <jbond@cumin1001> START - Cookbook sre.dns.wipe-cache puppetboard.discovery.wmnet on all recursors [production]
12:17 <jbond@cumin1001> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) puppetboard-next.discovery.wmnet on all recursors [production]
12:17 <jbond@cumin1001> START - Cookbook sre.dns.wipe-cache puppetboard-next.discovery.wmnet on all recursors [production]
11:50 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts dbproxy1016.eqiad.wmnet [production]
11:50 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
11:50 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: dbproxy1016.eqiad.wmnet decommissioned, removing all IPs except the asset tag one - ladsgroup@cumin1001" [production]
11:47 <ladsgroup@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: dbproxy1016.eqiad.wmnet decommissioned, removing all IPs except the asset tag one - ladsgroup@cumin1001" [production]
11:45 <ladsgroup@cumin1001> START - Cookbook sre.dns.netbox [production]
11:40 <ladsgroup@cumin1001> START - Cookbook sre.hosts.decommission for hosts dbproxy1016.eqiad.wmnet [production]
11:22 <jennifer_ebe> deploying refinery to hdfs [analytics]
11:13 <jebe@deploy1002> Finished deploy [analytics/refinery@eaabff2] (hadoop-test): Regular analytics weekly train TEST [analytics/refinery@eaabff2] (duration: 01m 43s) [production]
11:12 <jebe@deploy1002> Started deploy [analytics/refinery@eaabff2] (hadoop-test): Regular analytics weekly train TEST [analytics/refinery@eaabff2] [production]
11:11 <jebe@deploy1002> Finished deploy [analytics/refinery@eaabff2] (thin): Regular analytics weekly train THIN [analytics/refinery@eaabff2] (duration: 00m 04s) [production]
11:11 <jebe@deploy1002> Started deploy [analytics/refinery@eaabff2] (thin): Regular analytics weekly train THIN [analytics/refinery@eaabff2] [production]
11:09 <jebe@deploy1002> Finished deploy [analytics/refinery@eaabff2]: Regular analytics weekly train [analytics/refinery@eaabff2] (duration: 10m 24s) [production]
10:59 <jebe@deploy1002> Started deploy [analytics/refinery@eaabff2]: Regular analytics weekly train [analytics/refinery@eaabff2] [production]
10:57 <jennifer_ebe> deploying refinery using scap [analytics]
10:54 <btullis> migrating hive services to an-coord1002 via DNS for T329716 (to permit restart of hive services on an-coord1001). [analytics]
10:15 <btullis> restarting oozie service on an-coord1001 for T329716 [analytics]
10:14 <btullis> restarting presto-service on an-coord1001 for T329716 [analytics]
10:06 <btullis> restarting java services on an-test-coord1001 for JVM update [analytics]
10:02 <elukey@deploy1002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'ores-legacy' for release 'main' . [production]
09:54 <jayme@deploy1002> helmfile [staging] DONE helmfile.d/services/ipoid: apply [production]
09:54 <jayme@deploy1002> helmfile [staging] START helmfile.d/services/ipoid: apply [production]
09:50 <jayme@deploy1002> helmfile [staging] DONE helmfile.d/services/ipoid: apply [production]
09:48 <jayme@deploy1002> helmfile [staging] START helmfile.d/services/ipoid: apply [production]
09:44 <wm-bot> <root> webservice restart ref. cloud-l [tools.pb]
09:43 <isaranto@deploy1002> helmfile [ml-serve-eqiad] Ran 'sync' command on namespace 'experimental' for release 'main' . [production]
09:14 <btullis@deploy1002> Finished deploy [airflow-dags/analytics_test@be05071]: (no justification provided) (duration: 00m 04s) [production]
09:14 <btullis@deploy1002> Started deploy [airflow-dags/analytics_test@be05071]: (no justification provided) [production]
09:13 <btullis> correction: to an-test-client1002 [analytics]
09:13 <btullis> deploying airflow-dags for analytics_test to an-test-client1001 [analytics]
09:12 <marostegui@cumin1001> dbctl commit (dc=all): 'db1180 (re)pooling @ 100%: Repooling after maintenance', diff saved to https://phabricator.wikimedia.org/P49599 and previous config saved to /var/cache/conftool/dbconfig/20230719-091205-root.json [production]