2021-03-22
10:26 <elukey@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'sync'. [production]
10:26 <elukey@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'sync'. [production]
10:25 <elukey@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'sync'. [production]
10:21 <jayme@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'sync'. [production]
10:21 <jayme@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'sync'. [production]
10:17 <elukey@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'sync'. [production]
10:17 <elukey@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'sync'. [production]
10:15 <elukey@deploy1002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'sync'. [production]
10:15 <elukey@deploy1002> helmfile [ml-serve-codfw] START helmfile.d/admin 'sync'. [production]
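The five helmfile lines above are START/DONE pairs from admin syncs against the new ML cluster. A minimal sketch of the underlying invocation, assuming the charts checkout lives under /srv/deployment-charts on the deployment host (the helmfile.d/admin path and environment name are from the log; the flag usage is illustrative):

    # On the deployment host, from the admin helmfile directory;
    # -e selects the target cluster, "sync" applies all releases.
    cd /srv/deployment-charts/helmfile.d/admin
    helmfile -e ml-serve-codfw sync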
10:12 <elukey> run homer for cr1/cr2 eqiad and codfw to add new iBGP session for the k8s ML clusters - https://gerrit.wikimedia.org/r/c/operations/homer/public/+/661055 [production]
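A hedged sketch of the homer run referenced above, assuming homer's usual device-pattern/action CLI (the device pattern and commit message here are illustrative, not from the log):

    # Preview the generated router config, then push it.
    homer 'cr*' diff
    homer 'cr*' commit 'Add iBGP sessions for the k8s ML clusters'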
09:50 <reedy@deploy1002> Synchronized wmf-config/InitialiseSettings.php: Config cleanup (duration: 00m 57s) [production]
09:49 <reedy@deploy1002> Synchronized wmf-config/InitialiseSettings-labs.php: Config cleanup (duration: 00m 59s) [production]
09:48 <reedy@deploy1002> Synchronized wmf-config/CommonSettings-labs.php: Config cleanup (duration: 01m 20s) [production]
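The three "Synchronized wmf-config/..." lines above are what scap logs for single-file syncs. A minimal sketch, assuming the standard scap workflow from the staging directory on the deployment host (file path and message taken from the log):

    # The quoted message becomes the log summary shown above.
    scap sync-file wmf-config/InitialiseSettings.php 'Config cleanup'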
09:35 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1142 for schema change', diff saved to https://phabricator.wikimedia.org/P14971 and previous config saved to /var/cache/conftool/dbconfig/20210322-093558-marostegui.json [production]
09:15 <marostegui@cumin1001> dbctl commit (dc=all): 'db1141 (re)pooling @ 100%: Slowly repool db1141', diff saved to https://phabricator.wikimedia.org/P14970 and previous config saved to /var/cache/conftool/dbconfig/20210322-091534-root.json [production]
09:00 <marostegui@cumin1001> dbctl commit (dc=all): 'db1141 (re)pooling @ 75%: Slowly repool db1141', diff saved to https://phabricator.wikimedia.org/P14969 and previous config saved to /var/cache/conftool/dbconfig/20210322-090030-root.json [production]
08:45 <marostegui@cumin1001> dbctl commit (dc=all): 'db1141 (re)pooling @ 50%: Slowly repool db1141', diff saved to https://phabricator.wikimedia.org/P14968 and previous config saved to /var/cache/conftool/dbconfig/20210322-084527-root.json [production]
08:30 <marostegui@cumin1001> dbctl commit (dc=all): 'db1141 (re)pooling @ 25%: Slowly repool db1141', diff saved to https://phabricator.wikimedia.org/P14967 and previous config saved to /var/cache/conftool/dbconfig/20210322-083023-root.json [production]
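The four commits above are one staged repool, raising db1141 from 25% to 100% in steps. A sketch of the dbctl calls behind each step, assuming dbctl's instance/config subcommands (percentage and message are from the log):

    # Raise the pooling percentage one step, then commit; the commit
    # produces the diff and config-backup links seen in the log.
    dbctl instance db1141 pool -p 25
    dbctl config commit -m 'db1141 (re)pooling @ 25%: Slowly repool db1141'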
08:13 <godog> swift eqiad-prod: less weight for ms-be[1019-1026] / more weight to ms-be106[0-3] - T272836 T268435 [production]
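A hedged sketch of the swift weight change above, assuming the stock swift-ring-builder workflow (builder file, device ids, and weight values are illustrative; only the host ranges are from the log):

    # Lower the weight of an old ms-be101x device, raise a new ms-be106x
    # one, then rebalance so partitions migrate toward the new backends.
    swift-ring-builder object.builder set_weight d12 1000
    swift-ring-builder object.builder set_weight d60 4000
    swift-ring-builder object.builder rebalance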
08:13 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1158.eqiad.wmnet with reason: REIMAGE [production]
08:11 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on db1158.eqiad.wmnet with reason: REIMAGE [production]
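The START/END pair above is the downtime cookbook setting a 2-hour Icinga downtime ahead of the reimage. A hedged sketch of the invocation from a cumin host (duration, reason, and host are from the log; the exact flag names are an assumption):

    cookbook sre.hosts.downtime --hours 2 --reason REIMAGE db1158.eqiad.wmnet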
08:02 <jayme> build and release docker-registry.discovery.wmnet/eventrouter:0.3.0-6, docker-registry.discovery.wmnet/fluent-bit:1.5.3-3, docker-registry.discovery.wmnet/ratelimit:1.5.1-s3 [production]
08:00 <marostegui> Stop MySQL on db1085 to clone db1165 (lag will appear on s6 on wiki replicas) T258361 [production]
08:00 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1085 to clone db1165', diff saved to https://phabricator.wikimedia.org/P14965 and previous config saved to /var/cache/conftool/dbconfig/20210322-080020-marostegui.json [production]
07:51 <elukey> stop/start mariadb instances on dbstore1004 to reduce buffer pool memory settings - T273865 [production]
07:37 <marostegui@cumin1001> dbctl commit (dc=all): 'db1161 (re)pooling @ 100%: Slowly repool db1161', diff saved to https://phabricator.wikimedia.org/P14964 and previous config saved to /var/cache/conftool/dbconfig/20210322-073747-root.json [production]
07:22 <marostegui@cumin1001> dbctl commit (dc=all): 'db1161 (re)pooling @ 75%: Slowly repool db1161', diff saved to https://phabricator.wikimedia.org/P14963 and previous config saved to /var/cache/conftool/dbconfig/20210322-072243-root.json [production]
07:14 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1141 for schema change', diff saved to https://phabricator.wikimedia.org/P14962 and previous config saved to /var/cache/conftool/dbconfig/20210322-071430-marostegui.json [production]
07:07 <marostegui@cumin1001> dbctl commit (dc=all): 'db1161 (re)pooling @ 50%: Slowly repool db1161', diff saved to https://phabricator.wikimedia.org/P14961 and previous config saved to /var/cache/conftool/dbconfig/20210322-070740-root.json [production]
06:52 <marostegui@cumin1001> dbctl commit (dc=all): 'db1161 (re)pooling @ 25%: Slowly repool db1161', diff saved to https://phabricator.wikimedia.org/P14960 and previous config saved to /var/cache/conftool/dbconfig/20210322-065236-root.json [production]
06:37 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove db1084 from dbctl T276302', diff saved to https://phabricator.wikimedia.org/P14959 and previous config saved to /var/cache/conftool/dbconfig/20210322-063732-marostegui.json [production]
06:11 <marostegui> Sanitize db1124 db2094 db1154: taywiki trvwiki mnwwiktionary [production]
04:28 <kartik@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'cxserver' for release 'staging'. [production]
2021-03-21
10:25 <_joe_> restarting gerrit on gerrit1001, using 45G of reserved memory [production]
09:22 <elukey> install apache2-bin-dbgsym on gerrit1001 - T277127 [production]
08:50 <qchris> Restarting apache on gerrit1001 again (all apache workers busy again) see T277127 [production]
08:18 <qchris> Restarting apache on gerrit1001 (all apache workers busy) [production]
2021-03-20
00:22 <tzatziki> altering emails for STei (WMF) and SGrabarczuk (WMF) [production]
2021-03-19
21:11 <mutante> scandium - stop apache and rerun puppet, which fails after reimaging because it tries to start nginx on port 80, which apache is already using - T268248 [production]
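A hedged sketch of the manual fix described above (service name from the log; run-puppet-agent is the usual wrapper on production hosts, assumed available here):

    # Free port 80 so the puppet run can start nginx, then re-run the agent.
    systemctl stop apache2
    run-puppet-agent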
20:31 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on scandium.eqiad.wmnet with reason: REIMAGE [production]
20:29 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on scandium.eqiad.wmnet with reason: REIMAGE [production]
20:15 <mutante> scandium - reimaging with buster [production]
20:14 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on scandium.eqiad.wmnet with reason: reimage [production]
20:14 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on scandium.eqiad.wmnet with reason: reimage [production]
20:11 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts mw2245.codfw.wmnet [production]
19:55 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission for hosts mw2245.codfw.wmnet [production]
19:53 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts mw2244.codfw.wmnet [production]
19:53 <legoktm@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) for new host lists1002.wikimedia.org [production]
19:50 <mutante> testreduce1001 - confirmed MariaDB @@datadir is /srv/data/mysql and deleting /var/lib/mysql (T277580) [production]
19:40 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission for hosts mw2244.codfw.wmnet [production]