2022-09-13
16:09 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2095.codfw.wmnet with reason: Maintenance [production]
16:07 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on clouddb[1013,1017,1021].eqiad.wmnet with reason: Maintenance [production]
16:07 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on clouddb[1013,1017,1021].eqiad.wmnet with reason: Maintenance [production]
16:07 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1154.eqiad.wmnet with reason: Maintenance [production]
16:07 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1154.eqiad.wmnet with reason: Maintenance [production]
16:05 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1191 (T314041)', diff saved to https://phabricator.wikimedia.org/P34628 and previous config saved to /var/cache/conftool/dbconfig/20220913-160536-ladsgroup.json [production]
15:57 <btullis> rolling out updated hadoop packages to an-airflow1003 [analytics]
15:55 <btullis> rolling out upgraded hadoop client packages to stat servers. [analytics]
15:51 <btullis> restarting eventlogging_to_druid_network_flows_internal_hourly.service eventlogging_to_druid_prefupdate_hourly.service refine_event_sanitized_analytics_immediate.service refine_event_sanitized_main_immediate.service [analytics]
15:50 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1189.eqiad.wmnet with reason: down [production]
15:50 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1189.eqiad.wmnet with reason: down [production]
15:49 <btullis> restarting eventlogging_to_druid_navigationtiming_hourly.service on an-launcher1002 [analytics]
15:48 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1189', diff saved to https://phabricator.wikimedia.org/P34626 and previous config saved to /var/cache/conftool/dbconfig/20220913-154810-root.json [production]
15:46 <btullis> restarting eventlogging_to_druid_editattemptstep_hourly.service on an-launcher1002 [analytics]
15:44 <btullis> cancel that last message. Upgrading hadoop packages on an-launcher instead. They were inadvertently omitted last time. [analytics]
15:42 <ebernhardson@deploy1002> Finished deploy [wikimedia/discovery/analytics@031604d]: Automatically drop historical partitions of subgraph analysis (duration: 02m 07s) [production]
15:42 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1145.eqiad.wmnet with reason: Maintenance [production]
15:41 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1145.eqiad.wmnet with reason: Maintenance [production]
15:41 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1112 (T314041)', diff saved to https://phabricator.wikimedia.org/P34625 and previous config saved to /var/cache/conftool/dbconfig/20220913-154151-ladsgroup.json [production]
15:40 <ebernhardson@deploy1002> Started deploy [wikimedia/discovery/analytics@031604d]: Automatically drop historical partitions of subgraph analysis [production]
15:39 <btullis> Going to downgrade hadoop on all hadoop-worker nodes to 2.10.1 [analytics]
15:36 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
15:30 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
15:30 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
15:26 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1112', diff saved to https://phabricator.wikimedia.org/P34624 and previous config saved to /var/cache/conftool/dbconfig/20220913-152644-ladsgroup.json [production]
15:23 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
15:21 <btullis> failed over hive to an-coord1002 via DNS https://gerrit.wikimedia.org/r/c/operations/dns/+/831906 [analytics]
15:20 <btullis> restarted yarn service on an-master1002 to make the active host an-master1001 again. [analytics]
15:18 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
15:17 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
15:17 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
15:16 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
15:14 <dancy@deploy1002> Finished scap: testwikis wikis to 1.39.0-wmf.28 refs T314190 (duration: 04m 31s) [production]
15:13 <volans@cumin1001> END (PASS) - Cookbook sre.deploy.python-code (exit_code=0) homer to cumin2002.codfw.wmnet,cumin1001.eqiad.wmnet with reason: Release v0.6.0 - volans@cumin1001 [production]
15:12 <volans@cumin1001> START - Cookbook sre.deploy.python-code homer to cumin2002.codfw.wmnet,cumin1001.eqiad.wmnet with reason: Release v0.6.0 - volans@cumin1001 [production]
15:11 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1112', diff saved to https://phabricator.wikimedia.org/P34623 and previous config saved to /var/cache/conftool/dbconfig/20220913-151138-ladsgroup.json [production]
15:11 <btullis> restart hive-server2 and hive-metastore service on an-coord1002 to pick up new version of hadoop [analytics]
15:10 <dancy@deploy1002> Started scap: testwikis wikis to 1.39.0-wmf.28 refs T314190 [production]
15:08 <dancy@deploy1002> deploy-promote aborted: (duration: 00m 02s) [production]
14:59 <dancy@deploy1002> Finished scap: testwikis wikis to 1.40.0-wmf.1 refs T314190 (duration: 04m 43s) [production]
14:56 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1112 (T314041)', diff saved to https://phabricator.wikimedia.org/P34622 and previous config saved to /var/cache/conftool/dbconfig/20220913-145631-ladsgroup.json [production]
14:55 <btullis> rolling out updated hadoop packages to analytics-airflow (cumin alias) hosts [analytics]
14:54 <dancy@deploy1002> Started scap: testwikis wikis to 1.40.0-wmf.1 refs T314190 [production]
14:51 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
14:50 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
14:50 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
14:49 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
14:47 <dancy@deploy1002> deploy-promote aborted: (duration: 01m 03s) [production]
14:47 <dancy@deploy1002> prep aborted: (duration: 00m 12s) [production]
14:46 <moritzm> restarting FPM/Apache on mediawiki canaries [production]