2019-10-29

10:21 <jynus> running import on m1-master, m1 replicas will lag for a while T236406 [production]
10:20 <oblivian@deploy1001> helmfile [EQIAD] Ran 'apply' command on namespace 'kube-system' for release 'rbac-deploy-clusterrole' . [production]
10:19 <oblivian@deploy1001> helmfile [CODFW] Ran 'apply' command on namespace 'kube-system' for release 'rbac-deploy-clusterrole' . [production]
10:15 <oblivian@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'kube-system' for release 'rbac-deploy-clusterrole' . [production]
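(For reference: a single-release helmfile apply like the three logged above is typically run as something along these lines; the selector form is an illustrative assumption, not taken from the log.)
  $ helmfile --selector name=rbac-deploy-clusterrole apply   # assumed selector; applies only that release in the current environment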
10:07 <arturo> deleting old jessie VMs tools-proxy-03/04 T235627 [tools]
10:07 <XioNoX> disable cr3-esams:et-1/0/0 (flapping) [production]
09:56 <filippo@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:55 <filippo@cumin1001> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) [production]
09:55 <filippo@cumin1001> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) [production]
09:54 <filippo@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:52 <filippo@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:51 <filippo@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:50 <filippo@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:50 <filippo@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:50 <filippo@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:49 <filippo@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:48 <filippo@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:48 <filippo@cumin1001> START - Cookbook sre.hosts.downtime [production]
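(The START/END pairs above are emitted by the cookbook runner on cumin1001; an invocation looks roughly like the following, where the host query, duration and reason, and the exact flag names, are illustrative assumptions.)
  $ sudo cookbook sre.hosts.downtime --hours 2 -r 'maintenance' 'elastic10*'   # assumed flags; downtimes the matching hosts in Icinga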
09:45 <hashar> Purging Docker images on CI instances [releng]
09:28 <gehel> plugin upgrade on relforge - T236123 [production]
09:27 <godog> reimage elastic 7 hw with Buster [production]
09:27 <vgutierrez> restart ats-tls on cp5007 disabling TCP SO_LINGER - T236458 [production]
08:51 <fdans> starting backfilling for per file mediarequests for 7 days from Sep 15 2015 [analytics]
08:43 <jynus> shutting down db1099 T227538 [production]
08:35 <jynus@cumin1001> dbctl commit (dc=all): 'Depool db1099', diff saved to https://phabricator.wikimedia.org/P9492 and previous config saved to /var/cache/conftool/dbconfig/20191029-083547-jynus.json [production]
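(The dbctl line above is the commit step of a depool; the usual two-step sequence is approximately the following, with the subcommand names given as assumptions about the tool rather than taken from the log.)
  $ sudo dbctl instance db1099 depool               # stage the depool of the replica
  $ sudo dbctl config commit -m 'Depool db1099'     # write the new config and save the diff, as logged above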
08:15 <XioNoX> push term allow_vmhost to cr3-esams loopback4 filter - T236598 [production]
08:06 <vgutierrez> restarting ats-tls on cp5007 with TCP FASTOPEN disabled - T236458 [production]
07:40 <moritzm> installing php7.3 security updates [production]
07:09 <elukey> roll restart java daemons on analytics1042, druid1003 and aqs1004 to pick up new openjdk upgrades [analytics]
07:06 <elukey> roll restart java daemons on analytics1042, druid1003 and aqs1004 to pick up new openjdk upgrades [production]
07:01 <_joe_> restart memcached on mc1024-1036, 1 hour apart, via cumin (T235188) [production]
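(A staggered restart like the one above is usually driven through cumin with a batch size of 1 and a sleep between batches; the exact command is an assumption, but it would look roughly like this.)
  $ sudo cumin -b 1 -s 3600 'mc[1024-1036].eqiad.wmnet' 'systemctl restart memcached'   # one host at a time, 1 hour apart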
06:26 <_joe_> restart memcached on mc1023 T235188 [production]
04:30 <wm-bot> <musikanimal> Updating to version 0.10.13 [tools.svgtranslate]
03:35 <vgutierrez> restarting varnish-frontend on cp5008 [production]

2019-10-28

23:23 <catrope@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Deploy Echo kask migration to officewiki for testing, part 3 (T222851) (duration: 00m 52s) [production]
23:20 <catrope@deploy1001> Synchronized wmf-config/CommonSettings.php: Deploy Echo kask migration to officewiki for testing, part 2 (T222851) (duration: 00m 52s) [production]
23:19 <catrope@deploy1001> Synchronized wmf-config/ProductionServices.php: Deploy Echo kask migration to officewiki for testing, part 1 (T222851) (duration: 00m 54s) [production]
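(The 'Synchronized wmf-config/...' entries are scap file syncs run from the deployment host; for one of them the underlying command would be approximately the following, assuming the standard sync-file form.)
  $ scap sync-file wmf-config/ProductionServices.php 'Deploy Echo kask migration to officewiki for testing, part 1 (T222851)'   # syncs the file to all app servers and logs the entry above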
23:18 <mutante> re-enabling puppet on moscovium (RT) [production]
22:58 <bd808> Updated to 28b15c5 (Rely on split-horizon DNS to find active proxy server) [tools.admin]
22:55 <jeh> run labs-ip-alias-dump on cloudservices1003 and cloudservices1004 T235627 [openstack]
22:50 <bd808> Live hacked tool-admin-web/src/Tools.php for front proxy change [tools.admin]
22:02 <ejegg> re-enabled basic fundraising jobs (Queue consumers, audit processors, TY mailer) [production]
20:56 <cdanis> restart memcached on mc1022 T235188 [production]
20:37 <Jeff_Green> authdns update to switch fundraising db service hostname [production]
20:19 <ejegg> disabled all fundraising scheduled jobs [production]
19:50 <rlazarus> restarted memcached on mc1021 (T235188) [production]
19:41 <ssastry@deploy1001> Finished deploy [parsoid/deploy@d932d6a]: Update parsoid to 089bf28d (duration: 02m 42s) [production]
19:38 <ssastry@deploy1001> Started deploy [parsoid/deploy@d932d6a]: Update parsoid to 089bf28d [production]
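(The Started/Finished pair above is a scap3 deployment run from the parsoid/deploy checkout on the deployment host; roughly, and treating the checkout path and exact form as assumptions:)
  $ cd /srv/deployment/parsoid/deploy && scap deploy 'Update parsoid to 089bf28d'   # assumed path; produces the Started/Finished log lines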
19:27 <bd808> Admins killed `node dist/index.js` process running on tools-sgebastion-07. Please use the job grid or kubernetes instead [tools.lziad]
19:20 <bd808> Admins killed youtube-dl process running on tools-sgebastion-07 [tools.faebot]