2020-05-21
12:05 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool db1143 and db1091', diff saved to https://phabricator.wikimedia.org/P11270 and previous config saved to /var/cache/conftool/dbconfig/20200521-120555-marostegui.json [production]
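For context, a repool recorded this way is issued through dbctl on the cumin host; a minimal sketch of the kind of invocation behind such a commit, where the pooling subcommands and the percentage flag are assumptions and only the commit message comes from the entry above:

  # sketch (flags/weights are assumptions): bring both instances back to full traffic, then commit the change
  dbctl instance db1143 pool -p 100
  dbctl instance db1091 pool -p 100
  dbctl config commit -m 'Fully repool db1143 and db1091'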
12:05 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw215[8-9].codfw.wmnet [production]
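The conftool line above is the logged form of a confctl call; a rough equivalent, assuming the standard confctl select syntax:

  # assumed confctl invocation behind the logged action: mark mw2158/mw2159 in codfw as depooled
  confctl select 'name=mw215[8-9].codfw.wmnet' set/pooled=no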
11:18 <hnowlan> Removed changeprop from scb hosts [production]
11:13 <ZI_Jony> restarted CVNBOT20 [cvn]
11:04 <vgutierrez> rolling restart of ncredir servers for kernel update [production]
10:40 <wm-bot> <rhinosf1> drop remind and use sopel default [tools.zppixbot]
10:17 <vgutierrez> restart of acme-chief servers for kernel update [production]
10:13 <jbond42> deploy CI for puppet private repo [production]
10:11 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1143 and db1091', diff saved to https://phabricator.wikimedia.org/P11268 and previous config saved to /var/cache/conftool/dbconfig/20200521-101100-marostegui.json [production]
10:10 <wm-bot> <rhinosf1> drop remind from exclude && git pull && reboot twice for tests when travis passes [tools.zppixbot-test]
10:07 <mutante> replaced backend of people.wikimedia.org - people1001 will be inaccessible, replaced with people1002 on buster. all home dirs have been synced over, there should be no difference except you have to use people1002 now for uploads (T247649) [production]
10:06 <godog> test adding --sni to check_http -S on icinga2001 - T253292 [production]
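The check_http test above pairs -S (HTTPS) with --sni so the host name being checked is also sent in the TLS handshake; a hand-run sketch, with the plugin path and target host chosen here as placeholders:

  # example invocation (path and host are placeholders): HTTPS check that sends SNI for the -H host
  /usr/lib/nagios/plugins/check_http -H en.wikipedia.org -S --sni -u /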
09:51 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1143 and db1091', diff saved to https://phabricator.wikimedia.org/P11267 and previous config saved to /var/cache/conftool/dbconfig/20200521-095100-marostegui.json [production]
09:28 <mutante> deneb - sudo systemctl reset-failed to clear Icinga alerts about systemd degraded state [production]
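The reset-failed step clears systemd's memory of failed units, which is what keeps the host reporting a "degraded" state; a small sketch of the usual sequence on the affected host:

  # list units stuck in the failed state, then clear them so the degraded status (and the Icinga alert) resets
  systemctl --failed
  sudo systemctl reset-failed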
09:16 <elukey> move Druid Analytics SQL in Superset to druid://an-druid1001.eqiad.wmnet:8082/druid/v2/sql/ [analytics]
09:12 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1143 and db1091', diff saved to https://phabricator.wikimedia.org/P11266 and previous config saved to /var/cache/conftool/dbconfig/20200521-091245-marostegui.json [production]
09:05 <elukey> move turnilo to an-druid1001 (beefier host) [analytics]
09:01 <mutante> LDAP - added lmata to wmf group (T253277) [production]
08:55 <XioNoX> Advertise Anycast 198.35.27.0/24 from esams - T253196 [production]
08:52 <XioNoX> Advertise Anycast 198.35.27.0/24 from eqsin - T253196 [production]
08:49 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool db1143 with minimal weight for the first time T252512', diff saved to https://phabricator.wikimedia.org/P11265 and previous config saved to /var/cache/conftool/dbconfig/20200521-084933-marostegui.json [production]
08:47 <XioNoX> Advertise Anycast 198.35.27.0/24 from eqiad/eqord - T253196 [production]
08:42 <marostegui@cumin1001> dbctl commit (dc=all): 'Add db1143 to the list of s4 hosts, depooled - T252512', diff saved to https://phabricator.wikimedia.org/P11264 and previous config saved to /var/cache/conftool/dbconfig/20200521-084226-marostegui.json [production]
08:34 <XioNoX> Advertise Anycast 198.35.27.0/24 from dfw - T253196 [production]
08:27 <XioNoX> Advertise Anycast 198.35.27.0/24 from ulsfo - T253196 [production]
08:20 <XioNoX> Delete ARIN route object for 198.35.26.0/23 - T253196 [production]
08:15 <elukey> roll restart of all druid historicals in the analytics cluster to pick up new settings [analytics]
08:13 <XioNoX> Delete ROA for 198.35.26.0/23 - T253196 [production]
08:10 <XioNoX> repool ulsfo - T253196 [production]
08:03 <XioNoX> Shrink ulsfo's 198.35.26.0/23 to 198.35.26.0/24 - T253196 [production]
07:29 <XioNoX> depool ulsfo - T253196 [production]
07:22 <marostegui> Purge events from tendril.global_status_log older than 24h - T252331 [production]
07:03 <jynus@cumin1001> dbctl commit (dc=all): 'Repool es1019 fully', diff saved to https://phabricator.wikimedia.org/P11263 and previous config saved to /var/cache/conftool/dbconfig/20200521-070335-jynus.json [production]
06:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1091 - T252512', diff saved to https://phabricator.wikimedia.org/P11261 and previous config saved to /var/cache/conftool/dbconfig/20200521-065858-marostegui.json [production]
06:28 <jynus@cumin1001> dbctl commit (dc=all): 'Repool es1019 with 50% weight', diff saved to https://phabricator.wikimedia.org/P11260 and previous config saved to /var/cache/conftool/dbconfig/20200521-062823-jynus.json [production]
06:04 <vgutierrez> pool cp5012 - T251219 [production]
05:42 <jynus@cumin1001> dbctl commit (dc=all): 'Repool es1019 with low weight', diff saved to https://phabricator.wikimedia.org/P11259 and previous config saved to /var/cache/conftool/dbconfig/20200521-054231-jynus.json [production]
05:03 <marostegui@cumin1001> dbctl commit (dc=all): 'Set enwiki as read-only=off after maintenance T251982', diff saved to https://phabricator.wikimedia.org/P11258 and previous config saved to /var/cache/conftool/dbconfig/20200521-050328-marostegui.json [production]
05:00 <marostegui@cumin1001> dbctl commit (dc=all): 'Set enwiki as read-only for maintenance T251982', diff saved to https://phabricator.wikimedia.org/P11257 and previous config saved to /var/cache/conftool/dbconfig/20200521-050029-marostegui.json [production]
03:38 <wm-bot> <bd808> Hard stop/start cycle to enable --canonical [tools.toolviews]
01:03 <krinkle@deploy1001> Synchronized wmf-config/mc.php: Ic9efa98312b (duration: 01m 08s) [production]
00:34 <bd808> Deleting grid job log files older than 7 days directly on the NFS backend server (T248188) [tools.zoomviewer]
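An age-based cleanup like this is usually a single find(1) pass on the backend; a minimal sketch, where the path and the .out/.err name patterns are assumptions rather than taken from the log:

  # hypothetical path and name patterns: remove grid job logs older than 7 days
  find /path/to/tools/project/zoomviewer -maxdepth 1 -type f \( -name '*.out' -o -name '*.err' \) -mtime +7 -delete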
2020-05-20
22:35 <bstorm_> created paws-k8s-worker-1/2/3/4 T211096 [paws]
22:13 <James_F> Docker: Publishing new zuul-cloner image with zuul-clonemap.yaml in it T252955 [releng]
22:12 <bstorm_> created paws-k8s-haproxy-1/2 with antiaffinity group T211096 [paws]
21:36 <bstorm_> created paws-k8s-control-1/2/3 with appropriate sec group and server group T211096 [paws]
21:18 <James_F> Docker: Publishing new npm-test images with npm-install-dev.py in them T252955 [releng]
20:16 <herron> logstash1011:~# kafka-preferred-replica-election --zookeeper conf1004.eqiad.wmnet,conf1005.eqiad.wmnet,conf1006.eqiad.wmnet/kafka/logging-eqiad [production]
19:27 <robh> cp5012 still offline for mem tests, "fast" testing complete without errors and extended testing in progress. system firmware was updated before testing. T251219 [production]
18:59 <bstorm_> created anti-affinity group "controlplane" T211096 [paws]
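The anti-affinity group for the PAWS control plane would normally be created through the OpenStack CLI; a sketch assuming the standard client syntax, since the exact command is not recorded in the entry:

  # assumed OpenStack CLI equivalent of the logged action: server group that spreads members across hypervisors
  openstack server group create --policy anti-affinity controlplane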