2020-09-18
ยง
|
10:26 <kormat@cumin1001> dbctl commit (dc=all): 'db2087:3316 (re)pooling @ 50%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12657 and previous config saved to /var/cache/conftool/dbconfig/20200918-102638-kormat.json [production]
10:16 <arturo> cloudvirt1039 libvirtd service issues were fixed with a reboot [admin]
10:11 <kormat@cumin1001> dbctl commit (dc=all): 'db2087:3316 (re)pooling @ 25%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12656 and previous config saved to /var/cache/conftool/dbconfig/20200918-101135-kormat.json [production]
09:56 <arturo> rebooting cloudvirt1039 (spare) to try to fix some weird libvirtd failure [admin]
09:55 <kormat@cumin1001> dbctl commit (dc=all): 'db2087:3316 depooling: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12655 and previous config saved to /var/cache/conftool/dbconfig/20200918-095554-kormat.json [production]
09:55 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:55 <aborrero@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:50 <arturo> enabling puppet in cloudvirts and effectively merging patches from T262979 [admin]
09:47 <twentyafterfour> deployed hotfix for T263063 to phab1001 [production]
09:47 <jayme> deleting some random pods in kubernetes staging to rebalance load back on kubestage1001 - T262527 [production]
09:46 <jayme> uncordoned kubestage1001 - T262527 [production]
09:46 <kormat@cumin1001> dbctl commit (dc=all): 'db2124 (re)pooling @ 100%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12654 and previous config saved to /var/cache/conftool/dbconfig/20200918-094608-kormat.json [production]
09:31 <kormat@cumin1001> dbctl commit (dc=all): 'db2124 (re)pooling @ 80%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12653 and previous config saved to /var/cache/conftool/dbconfig/20200918-093105-kormat.json [production]
09:30 <hashar> deployment-snapshot01: deleted /srv/mediawiki/php-master/cache/l10n/ and ran scap pull [releng]
09:24 <klausman@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:22 <klausman@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:16 <kormat@cumin1001> dbctl commit (dc=all): 'db2124 (re)pooling @ 60%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12652 and previous config saved to /var/cache/conftool/dbconfig/20200918-091601-kormat.json [production]
09:01 <hashar> deployment-snapshot01: deleted a couple of .~tmp~ directories under /srv/mediawiki/php-master/cache/l10n [releng]
09:00 <kormat@cumin1001> dbctl commit (dc=all): 'db2124 (re)pooling @ 40%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12651 and previous config saved to /var/cache/conftool/dbconfig/20200918-090058-kormat.json [production]
09:00 <jayme@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
08:59 <arturo> disable puppet in all buster cloudvirts (cloudvirt[1024,1031-1039].eqiad.wmnet) to merge a patch for T263205 and T262979 [admin]
08:56 <jayme@cumin1001> START - Cookbook sre.hosts.reboot-single [production]
08:56 <jayme> reboot kubestage1001 for clean state - T262527 [production]
08:54 <elukey> change analytics-in4/in6 filters on cr1/cr2 after https://gerrit.wikimedia.org/r/628300 [production]
08:50 <arturo> installing iptables from buster-bpo in cloudvirt1036 (T263205 and T262979) [admin]
08:47 <jayme@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
08:45 <kormat@cumin1001> dbctl commit (dc=all): 'db2124 (re)pooling @ 20%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12650 and previous config saved to /var/cache/conftool/dbconfig/20200918-084554-kormat.json [production]
08:43 <jayme@cumin1001> START - Cookbook sre.hosts.reboot-single [production]
08:43 <jayme> reboot kubestage1001 for kernel upgrade - T262527 [production]
08:30 <jayme@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
08:25 <jayme@cumin1001> START - Cookbook sre.hosts.reboot-single [production]
08:25 <jayme> reboot kubestage1001 for clean state testing - T262527 [production]
08:22 <kormat@cumin1001> dbctl commit (dc=all): 'db2124 depooling: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12648 and previous config saved to /var/cache/conftool/dbconfig/20200918-082223-kormat.json [production]
08:16 <klausman> reinstalling stat1004 with Buster [production]
07:17 <moritzm> installing xdg-utils security updates [production]
07:14 <XioNoX> push pfw policies - T263168 [production]
07:12 <jayme> draining kubestage1001 for kernel upgrade - T262527 [production]
06:21 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool es2018, es2012 after cloning es2029 and es2030 T261717', diff saved to https://phabricator.wikimedia.org/P12647 and previous config saved to /var/cache/conftool/dbconfig/20200918-062127-marostegui.json [production]
06:08 <marostegui@cumin1001> dbctl commit (dc=all): 'Repool db1106 after MCR changes', diff saved to https://phabricator.wikimedia.org/P12646 and previous config saved to /var/cache/conftool/dbconfig/20200918-060815-marostegui.json [production]
06:07 <marostegui@cumin1001> dbctl commit (dc=all): 'Repool db1131 after rack move', diff saved to https://phabricator.wikimedia.org/P12645 and previous config saved to /var/cache/conftool/dbconfig/20200918-060724-marostegui.json [production]
06:01 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool es2018, es2012 after cloning es2029 and es2030 T261717', diff saved to https://phabricator.wikimedia.org/P12644 and previous config saved to /var/cache/conftool/dbconfig/20200918-060103-marostegui.json [production]
05:37 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool es2018, es2012 after cloning es2029 and es2030 T261717', diff saved to https://phabricator.wikimedia.org/P12643 and previous config saved to /var/cache/conftool/dbconfig/20200918-053758-marostegui.json [production]
05:36 <marostegui@cumin1001> dbctl commit (dc=all): 'Add es2029 and es2030 to dbctl depooled - T261717', diff saved to https://phabricator.wikimedia.org/P12642 and previous config saved to /var/cache/conftool/dbconfig/20200918-053604-marostegui.json [production]
05:26 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool es2018, es2012 after cloning es2029 and es2030 T261717', diff saved to https://phabricator.wikimedia.org/P12641 and previous config saved to /var/cache/conftool/dbconfig/20200918-052608-marostegui.json [production]
05:15 <marostegui> Restart wikibugs [production]
04:42 <wm-bot> <bd808> Restarting bot. Seems to have lost connection with some channels. [tools.bridgebot]
01:20 <andrewbogott> repooling tools-sgeexec-0901, tools-sgeexec-0905, tools-sgeexec-0910, tools-sgeexec-0911, tools-sgeexec-0912 after flavor update [tools]
01:11 <andrewbogott> depooling tools-sgeexec-0901, tools-sgeexec-0905, tools-sgeexec-0910, tools-sgeexec-0911, tools-sgeexec-0912 for flavor update [tools]
01:08 <andrewbogott> repooling tools-sgeexec-0917, tools-sgeexec-0918, tools-sgeexec-0919, tools-sgeexec-0920 after flavor update [tools]
01:00 <andrewbogott> depooling tools-sgeexec-0917, tools-sgeexec-0918, tools-sgeexec-0919, tools-sgeexec-0920 for flavor update [tools]