2021-01-18 §
11:37 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-worker1121.eqiad.wmnet with reason: REIMAGE [production]
11:35 <elukey@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on an-worker1121.eqiad.wmnet with reason: REIMAGE [production]
11:33 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ms-fe1006.eqiad.wmnet [production]
11:28 <jmm@cumin2001> START - Cookbook sre.hosts.reboot-single for host ms-fe1006.eqiad.wmnet [production]
11:22 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-worker1120.eqiad.wmnet with reason: REIMAGE [production]
11:18 <elukey@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on an-worker1120.eqiad.wmnet with reason: REIMAGE [production]
11:17 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ms-fe1005.eqiad.wmnet [production]
11:11 <jmm@cumin2001> START - Cookbook sre.hosts.reboot-single for host ms-fe1005.eqiad.wmnet [production]
11:10 <hashar> Restarting Gerrit main instance on gerrit1001.wikimedia.org [production]
11:08 <hashar> Restarting Gerrit replica on gerrit2001.wikimedia.org [production]
10:58 <moritzm> installing python2.7 security updates on Stretch [production]
10:50 <dcaro> re-enabling puppet on cephcloudosd2* (codfw) [admin]
10:30 <marostegui@cumin1001> dbctl commit (dc=all): 'db1074 (re)pooling @ 100%: After moving wikireplicas to another host', diff saved to https://phabricator.wikimedia.org/P13799 and previous config saved to /var/cache/conftool/dbconfig/20210118-102959-root.json [production]
10:14 <marostegui@cumin1001> dbctl commit (dc=all): 'db1074 (re)pooling @ 75%: After moving wikireplicas to another host', diff saved to https://phabricator.wikimedia.org/P13798 and previous config saved to /var/cache/conftool/dbconfig/20210118-101456-root.json [production]
10:07 <dcaro> disabling puppet on cephcloudosd2* (codfw) to do some performance tests [admin]
10:00 <_joe_> restarting pybal on lvs1016, not talking to its etcd server [production]
09:59 <marostegui@cumin1001> dbctl commit (dc=all): 'db1074 (re)pooling @ 50%: After moving wikireplicas to another host', diff saved to https://phabricator.wikimedia.org/P13797 and previous config saved to /var/cache/conftool/dbconfig/20210118-095952-root.json [production]
09:51 <kormat@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) [production]
09:44 <marostegui@cumin1001> dbctl commit (dc=all): 'db1074 (re)pooling @ 25%: After moving wikireplicas to another host', diff saved to https://phabricator.wikimedia.org/P13796 and previous config saved to /var/cache/conftool/dbconfig/20210118-094449-root.json [production]
09:25 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1074 to stop replication T272008', diff saved to https://phabricator.wikimedia.org/P13795 and previous config saved to /var/cache/conftool/dbconfig/20210118-092546-marostegui.json [production]
09:24 <kormat@cumin1001> START - Cookbook sre.ganeti.makevm [production]
09:24 <marostegui@cumin1001> dbctl commit (dc=all): 'db1106 (re)pooling @ 100%: After moving wikireplicas to another host', diff saved to https://phabricator.wikimedia.org/P13794 and previous config saved to /var/cache/conftool/dbconfig/20210118-092429-root.json [production]
09:20 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove db1105:3311 from vslow', diff saved to https://phabricator.wikimedia.org/P13793 and previous config saved to /var/cache/conftool/dbconfig/20210118-092003-marostegui.json [production]
09:13 <moritzm> installing openssl 1.1 security updates on stretch [production]
09:09 <marostegui@cumin1001> dbctl commit (dc=all): 'db1106 (re)pooling @ 75%: After moving wikireplicas to another host', diff saved to https://phabricator.wikimedia.org/P13791 and previous config saved to /var/cache/conftool/dbconfig/20210118-090926-root.json [production]
09:06 <kormat@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
09:01 <kormat@cumin1001> START - Cookbook sre.dns.netbox [production]
09:00 <dcaro> Enabling custom application 'cinder' on pool codfw1dev-cinder to get rid of health warnings [admin]
08:54 <marostegui@cumin1001> dbctl commit (dc=all): 'db1106 (re)pooling @ 50%: After moving wikireplicas to another host', diff saved to https://phabricator.wikimedia.org/P13790 and previous config saved to /var/cache/conftool/dbconfig/20210118-085422-root.json [production]
08:46 <kormat@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) [production]
08:42 <kormat@cumin1001> START - Cookbook sre.hosts.decommission [production]
08:39 <marostegui@cumin1001> dbctl commit (dc=all): 'db1106 (re)pooling @ 25%: After moving wikireplicas to another host', diff saved to https://phabricator.wikimedia.org/P13788 and previous config saved to /var/cache/conftool/dbconfig/20210118-083919-root.json [production]
08:17 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1106 to stop replication, place db1105:3311 temporarily in vslow T272008', diff saved to https://phabricator.wikimedia.org/P13787 and previous config saved to /var/cache/conftool/dbconfig/20210118-081740-marostegui.json [production]
08:15 <moritzm> installing remaining openssl 1.0 security updates on stretch [production]
08:13 <elukey> clean up old archiva debs and upload 2.2.4-3 to buster-wikimedia - T272082 [production]
08:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 100%: After restarting for kernel upgrade', diff saved to https://phabricator.wikimedia.org/P13786 and previous config saved to /var/cache/conftool/dbconfig/20210118-080122-root.json [production]
07:46 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 75%: After restarting for kernel upgrade', diff saved to https://phabricator.wikimedia.org/P13785 and previous config saved to /var/cache/conftool/dbconfig/20210118-074618-root.json [production]
07:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 50%: After restarting for kernel upgrade', diff saved to https://phabricator.wikimedia.org/P13784 and previous config saved to /var/cache/conftool/dbconfig/20210118-073115-root.json [production]
07:16 <marostegui@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 25%: After restarting for kernel upgrade', diff saved to https://phabricator.wikimedia.org/P13783 and previous config saved to /var/cache/conftool/dbconfig/20210118-071611-root.json [production]
06:53 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1138 for kernel upgrade', diff saved to https://phabricator.wikimedia.org/P13782 and previous config saved to /var/cache/conftool/dbconfig/20210118-065312-marostegui.json [production]
06:35 <marostegui> Reboot dbproxy2001, dbproxy2002, dbproxy2003 for kernel upgrade [production]
06:22 <marostegui> Reboot db1154 and db1155 for kernel upgrade [production]
2021-01-17 §
16:53 <arturo> icinga downtime labstore1004 /srv/tools space check for 3 days (T272247) [admin]
03:44 <James_F> Zuul: [mediawiki/core] Add composer (not vendor) experimental PHP 8.0 job T248925 [releng]
00:33 <Operator873> CVNBot6-7 and CVNBot9-10 restarted [cvn]
2021-01-16 §
23:24 <James_F> Docker: Building cascade of new php-ast image T271428 [releng]
19:36 <wm-bot> <lucaswerkmeister> deployed 20cf18c1bc (code cleanups) [tools.ranker]
17:48 <wm-bot> <lucaswerkmeister> deployed 8e2c9a4b34 (cleanup) [tools.ranker]
17:48 <wm-bot> <lucaswerkmeister> deployed 97774ca30c (initial deployment) about ten minutes ago [tools.ranker]
12:18 <elukey> elukey@cumin1001:~$ sudo cumin 'A:mw-app-canary and A:mw-eqiad' 'run-puppet-agent' -b 10 - T272215 [production]