2020-05-21
09:51 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1143 and db1091', diff saved to https://phabricator.wikimedia.org/P11267 and previous config saved to /var/cache/conftool/dbconfig/20200521-095100-marostegui.json [production]
09:28 <mutante> deneb - sudo systemctl reset-failed to clear Icinga alerts about systemd degraded state [production]
09:16 <elukey> move Druid Analytics SQL in Superset to druid://an-druid1001.eqiad.wmnet:8082/druid/v2/sql/ [analytics]
09:12 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1143 and db1091', diff saved to https://phabricator.wikimedia.org/P11266 and previous config saved to /var/cache/conftool/dbconfig/20200521-091245-marostegui.json [production]
09:05 <elukey> move turnilo to an-druid1001 (beefier host) [analytics]
09:01 <mutante> LDAP - added lmata to wmf group (T253277) [production]
08:55 <XioNoX> Advertise Anycast 198.35.27.0/24 from esams - T253196 [production]
08:52 <XioNoX> Advertise Anycast 198.35.27.0/24 from eqsin - T253196 [production]
08:49 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool db1143 with minimal weight for the first time T252512', diff saved to https://phabricator.wikimedia.org/P11265 and previous config saved to /var/cache/conftool/dbconfig/20200521-084933-marostegui.json [production]
08:47 <XioNoX> Advertise Anycast 198.35.27.0/24 from eqiad/eqord - T253196 [production]
08:42 <marostegui@cumin1001> dbctl commit (dc=all): 'Add db1143 to the list of s4 hosts, depooled - T252512', diff saved to https://phabricator.wikimedia.org/P11264 and previous config saved to /var/cache/conftool/dbconfig/20200521-084226-marostegui.json [production]
08:34 <XioNoX> Advertise Anycast 198.35.27.0/24 from dfw - T253196 [production]
08:27 <XioNoX> Advertise Anycast 198.35.27.0/24 from ulsfo - T253196 [production]
08:20 <XioNoX> Delete ARIN route object for 198.35.26.0/23 - T253196 [production]
08:15 <elukey> roll restart of all druid historicals in the analytics cluster to pick up new settings [analytics]
08:13 <XioNoX> Delete ROA for 198.35.26.0/23 - T253196 [production]
08:10 <XioNoX> repool ulsfo - T253196 [production]
08:03 <XioNoX> Shrink ulsfo's 198.35.26.0/23 to 198.35.26.0/24 - T253196 [production]
07:29 <XioNoX> depool ulsfo - T253196 [production]
07:22 <marostegui> Purge events from tendril.global_status_log older than 24h - T252331 [production]
07:03 <jynus@cumin1001> dbctl commit (dc=all): 'Repool es1019 fully', diff saved to https://phabricator.wikimedia.org/P11263 and previous config saved to /var/cache/conftool/dbconfig/20200521-070335-jynus.json [production]
06:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1091 - T252512', diff saved to https://phabricator.wikimedia.org/P11261 and previous config saved to /var/cache/conftool/dbconfig/20200521-065858-marostegui.json [production]
06:28 <jynus@cumin1001> dbctl commit (dc=all): 'Repool es1019 with 50% weight', diff saved to https://phabricator.wikimedia.org/P11260 and previous config saved to /var/cache/conftool/dbconfig/20200521-062823-jynus.json [production]
06:04 <vgutierrez> pool cp5012 - T251219 [production]
05:42 <jynus@cumin1001> dbctl commit (dc=all): 'Repool es1019 with low weight', diff saved to https://phabricator.wikimedia.org/P11259 and previous config saved to /var/cache/conftool/dbconfig/20200521-054231-jynus.json [production]
05:03 <marostegui@cumin1001> dbctl commit (dc=all): 'Set enwiki as read-only=off after maintenance T251982', diff saved to https://phabricator.wikimedia.org/P11258 and previous config saved to /var/cache/conftool/dbconfig/20200521-050328-marostegui.json [production]
05:00 <marostegui@cumin1001> dbctl commit (dc=all): 'Set enwiki as read-only for maintenance T251982', diff saved to https://phabricator.wikimedia.org/P11257 and previous config saved to /var/cache/conftool/dbconfig/20200521-050029-marostegui.json [production]
03:38 <wm-bot> <bd808> Hard stop/start cycle to enable --canonical [tools.toolviews]
01:03 <krinkle@deploy1001> Synchronized wmf-config/mc.php: Ic9efa98312b (duration: 01m 08s) [production]
00:34 <bd808> Deleting grid job log files older than 7 days directly on the NFS backend server (T248188) [tools.zoomviewer]
2020-05-20
22:35 <bstorm_> created paws-k8s-worker-1/2/3/4 T211096 [paws]
22:13 <James_F> Docker: Publishing new zuul-cloner image with zuul-clonemap.yaml in it T252955 [releng]
22:12 <bstorm_> created paws-k8s-haproxy-1/2 with antiaffinity group T211096 [paws]
21:36 <bstorm_> created paws-k8s-control-1/2/3 with appropriate sec group and server group T211096 [paws]
21:18 <James_F> Docker: Publishing new npm-test images with npm-install-dev.py in them T252955 [releng]
20:16 <herron> logstash1011:~# kafka-preferred-replica-election --zookeeper conf1004.eqiad.wmnet,conf1005.eqiad.wmnet,conf1006.eqiad.wmnet/kafka/logging-eqiad [production]
19:27 <robh> cp5012 still offline for mem tests, "fast" testing complete without errors and extended testing in progress. system firmware was updated before testing. T251219 [production]
18:59 <bstorm_> created anti-affinity group "controlplane" T211096 [paws]
18:10 <XioNoX> accept 198.35.27.0/24 from Anycast peers on all routers - T253196 [production]
18:01 <XioNoX> add BGP between authdns2001 and cr1-codfw - T253196 [production]
17:57 <XioNoX> accept 198.35.27.0/24 from Anycast peers on cr3-ulsfo - T253196 [production]
17:44 <robh> cp5012 rebooting for troubleshooting [production]
17:02 <bblack> dns* + authdns* - disabling puppet to test https://gerrit.wikimedia.org/r/#/c/operations/puppet/+/597311/ [production]
16:53 <bblack> kraz.wikimedia.org ( https://wikitech.wikimedia.org/wiki/IRCD ) - stopping ircecho then ircd, then restarting them in reverse order - T239993 [production]
16:38 <bstorm_> deleting the old shut-down VMs from the last effort to rebuild paws T211096 [paws]
16:36 <bstorm_> cleaned up the old DNS entries for the external LBs that have been off for a year [paws]
16:01 <jayme@deploy1001> helmfile [CODFW] Ran 'sync' command on namespace 'mathoid' for release 'production' . [production]
16:01 <jayme@deploy1001> helmfile [CODFW] Ran 'sync' command on namespace 'mathoid' for release 'canary' . [production]
15:42 <elukey> update puppet compiler's facts [production]
15:21 <moritzm> installing libssh security updates [production]