2020-01-13
06:36 <marostegui> Deploy schema change on db1112 with replication (lag will appear on s3 on labs) - T234052 [production]
06:35 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1112', diff saved to https://phabricator.wikimedia.org/P10131 and previous config saved to /var/cache/conftool/dbconfig/20200113-063513-marostegui.json [production]
06:20 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1081 for compression T232446', diff saved to https://phabricator.wikimedia.org/P10130 and previous config saved to /var/cache/conftool/dbconfig/20200113-062007-marostegui.json [production]
06:18 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool db1084', diff saved to https://phabricator.wikimedia.org/P10129 and previous config saved to /var/cache/conftool/dbconfig/20200113-061835-marostegui.json [production]
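The repool commit above and the 'Slowly repool db1084' commits below follow the usual depool → maintenance → staged repool pattern driven by dbctl on a cumin host. As a minimal sketch only, such a cycle might look roughly like this; the subcommands, flags and weights are assumptions from memory of the dbctl CLI, not taken from this log:
    # Illustrative only: dbctl subcommands/flags and weights are assumed, not logged here.
    sudo dbctl instance db1084 depool
    sudo dbctl config commit -m 'Depool db1084'
    # ... run the compression / schema change on the depooled replica ...
    sudo dbctl instance db1084 pool -p 25
    sudo dbctl config commit -m 'Slowly repool db1084 after compression'
    sudo dbctl instance db1084 pool -p 100
    sudo dbctl config commit -m 'Fully repool db1084'
Each commit is what produces the 'diff saved to …' Phabricator paste and the previous-config JSON path recorded in these entries.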
06:14 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1084 after compression', diff saved to https://phabricator.wikimedia.org/P10128 and previous config saved to /var/cache/conftool/dbconfig/20200113-061434-marostegui.json [production]
06:11 <marostegui> Deploy schema change on s1 master (db1083) - T234052 [production]
06:11 <marostegui@cumin1001> dbctl commit (dc=all): 'Repool es1013', diff saved to https://phabricator.wikimedia.org/P10127 and previous config saved to /var/cache/conftool/dbconfig/20200113-061106-marostegui.json [production]
06:10 <marostegui@cumin1001> dbctl commit (dc=all): 'Repool db1075 T234052', diff saved to https://phabricator.wikimedia.org/P10126 and previous config saved to /var/cache/conftool/dbconfig/20200113-061025-marostegui.json [production]
06:08 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool es1013', diff saved to https://phabricator.wikimedia.org/P10125 and previous config saved to /var/cache/conftool/dbconfig/20200113-060841-marostegui.json [production]
06:01 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1084 after compression', diff saved to https://phabricator.wikimedia.org/P10124 and previous config saved to /var/cache/conftool/dbconfig/20200113-060112-marostegui.json [production]
06:00 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1075 T234052', diff saved to https://phabricator.wikimedia.org/P10123 and previous config saved to /var/cache/conftool/dbconfig/20200113-060012-marostegui.json [production]
05:58 <marostegui> Remove partitions from db1105:3312 - T239453 [production]
05:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1105:3312 - T239453', diff saved to https://phabricator.wikimedia.org/P10122 and previous config saved to /var/cache/conftool/dbconfig/20200113-055811-marostegui.json [production]
05:55 <marostegui@cumin1001> dbctl commit (dc=all): 'Repool db2091:3312', diff saved to https://phabricator.wikimedia.org/P10121 and previous config saved to /var/cache/conftool/dbconfig/20200113-055554-marostegui.json [production]
05:53 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1084 after compression', diff saved to https://phabricator.wikimedia.org/P10120 and previous config saved to /var/cache/conftool/dbconfig/20200113-055315-marostegui.json [production]
05:51 <marostegui> Deploy schema change on x1 master on flowdb with replication - T241387 [production]
02:02 <andrewbogott> restarted mariadb on cloudservices1003, cloudservices1004, cloudservices2001-dev, clouddb2001-dev for T239791 [production]
00:58 <jiji@cumin1001> conftool action : set/pooled=yes; selector: name=cp3061.esams.wmnet [production]
00:53 <jiji@cumin1001> conftool action : set/pooled=yes; selector: name=cp3065.esams.wmnet [production]
00:23 <jiji@cumin1001> conftool action : set/pooled=no; selector: name=cp3061.esams.wmnet [production]
00:23 <jiji@cumin1001> conftool action : set/pooled=no; selector: name=cp3065.esams.wmnet [production]
00:22 <effie> depool and restart cp3065 cp3061 - T238305 [production]
00:21 <effie> depool and restart cp3065 cp3061 [production]
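The four conftool actions above are the depool/repool halves of effie's "depool and restart cp3065 cp3061" (T238305). A minimal sketch of issuing such actions with confctl; the selector and pooled values are the ones recorded above, but the exact invocation is assumed rather than logged:
    # Sketch only; the confctl invocation is inferred from the logged 'conftool action' lines.
    sudo confctl select 'name=cp3065.esams.wmnet' set/pooled=no    # depool
    # ... restart the service on the host ...
    sudo confctl select 'name=cp3065.esams.wmnet' set/pooled=yes   # repool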
2020-01-10
22:33 <mutante> ms-be1026 sudo systemctl reset-failed (failed Session 372989 of user debmonitor) [production]
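The reset-failed entry above is plain systemd housekeeping; the equivalent commands are:
    systemctl --failed              # list units currently in the failed state
    sudo systemctl reset-failed     # clear the failed state (optionally pass a specific unit)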
20:45 <jeh> cloudcontrol200[13]-dev schedule downtime until Feb 28 2020 on systemd service check T242462 [production]
20:29 <jeh> cloudmetrics100[12] schedule downtime until Feb 28 2020 on prometheus check T242460 [production]
20:03 <urandom> drop legacy Parsoid/JS storage keyspaces, production env -- T242344 [production]
19:56 <otto@deploy1001> helmfile [EQIAD] Ran 'apply' command on namespace 'eventgate-logging-external' for release 'logging-external' . [production]
19:54 <otto@deploy1001> helmfile [CODFW] Ran 'apply' command on namespace 'eventgate-logging-external' for release 'logging-external' . [production]
19:52 <otto@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'eventgate-main' for release 'main' . [production]
19:51 <otto@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'eventgate-analytics' for release 'analytics' . [production]
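The helmfile entries above are per-environment applies of the eventgate releases from deploy1001. In rough outline, assuming the usual one-directory-per-service layout (the path below is an assumption, not recorded in the log):
    # Sketch only; the service directory path is assumed.
    cd /srv/deployment-charts/helmfile.d/services/eventgate-logging-external
    helmfile -e staging diff     # preview the change
    helmfile -e staging apply    # staging first
    helmfile -e codfw apply
    helmfile -e eqiad apply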
19:48 <mutante> LDAP - add Zbyszko Papierski to "wmf" group (T242341) [production]
19:47 <mutante> LDAP - add Hugh Nowlan to "wmf" group (T242309) [production]
19:42 <dcausse> restarting blazegraph on wdqs1005 [production]
19:40 <ebernhardson> restart mjolnir-kafka-bulk-daemon across eqiad and codfw search clusters [production]
19:40 <ebernhardson@deploy1001> Finished deploy [search/mjolnir/deploy@e141941]: repair model upload in bulk daemon (duration: 05m 02s) [production]
19:35 <ebernhardson@deploy1001> Started deploy [search/mjolnir/deploy@e141941]: repair model upload in bulk daemon [production]
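The Started/Finished pair above is a scap3 deployment of search/mjolnir/deploy at e141941. A rough sketch, assuming the standard checkout location on the deploy host (the path is an assumption):
    # Sketch only; the checkout path on the deploy host is assumed.
    cd /srv/deployment/search/mjolnir/deploy
    scap deploy 'repair model upload in bulk daemon'   # emits Started/Finished log lines like the ones above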
19:13 <otto@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'eventgate-logging-external' for release 'logging-external' . [production]
18:53 <mutante> welcome new (restbase) service deployer Clara Andrew-Wani (T242152) [production]
18:29 <bd808> Restarted zuul on contint1001; no logs since 2020-01-10 17:55:28,452 [production]
11:48 <moritzm> stop/mask nginx on hassium/hassaleh T224567 [production]
10:56 <akosiaris> repool mathoid codfw for testing canary support in the mathoid helm chart [production]