2020-06-22
05:33 <marostegui@cumin2001> dbctl commit (dc=all): 'Depool db1118 for reimage and InnoDB compression', diff saved to https://phabricator.wikimedia.org/P11617 and previous config saved to /var/cache/conftool/dbconfig/20200622-053334-marostegui.json [production]
05:31 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool db1134', diff saved to https://phabricator.wikimedia.org/P11616 and previous config saved to /var/cache/conftool/dbconfig/20200622-053104-marostegui.json [production]
05:17 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1134', diff saved to https://phabricator.wikimedia.org/P11615 and previous config saved to /var/cache/conftool/dbconfig/20200622-051730-marostegui.json [production]
05:17 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1134', diff saved to https://phabricator.wikimedia.org/P11614 and previous config saved to /var/cache/conftool/dbconfig/20200622-051720-marostegui.json [production]
05:03 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1134', diff saved to https://phabricator.wikimedia.org/P11613 and previous config saved to /var/cache/conftool/dbconfig/20200622-050259-marostegui.json [production]
04:50 <marostegui> Deploy schema change on s3 primary master with a big sleep between wikis - T250066 [production]
04:48 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1134', diff saved to https://phabricator.wikimedia.org/P11612 and previous config saved to /var/cache/conftool/dbconfig/20200622-044853-marostegui.json [production]
2020-06-20
22:56 <cdanis@cumin2001> dbctl commit (dc=all): 'db1088 seems to have crashed', diff saved to https://phabricator.wikimedia.org/P11611 and previous config saved to /var/cache/conftool/dbconfig/20200620-225624-cdanis.json [production]
07:42 <elukey> powercycle an-worker1093 - bug soft lock up CPU showed in mgmt console [production]
07:36 <elukey> powercycle an-worker1091 - bug soft lock up CPU showed in mgmt console [production]
2020-06-19
18:10 <otto@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Bump eventlogging_Test schema version to 1.1.0 to pick up client_dt - T238230 (duration: 00m 59s) [production]
16:07 <mutante> ganeti4003 - rebooting install4001 - trying to bootstrap OS install from install2003 [production]
15:47 <dzahn@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) [production]
15:28 <godog> roll-restart kibana to apply new settings [production]
13:01 <moritzm> installing cups security updates (client side libs/tools) [production]
12:31 <qchris> Disabling puppet on gerrit1002 (test instance) to do some more testing [production]
12:14 <godog> delete march indices from logstash 5 eqiad to free up space [production]
12:12 <marostegui@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
12:10 <marostegui@cumin2001> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) [production]
12:08 <marostegui@cumin2001> START - Cookbook sre.hosts.downtime [production]
12:07 <marostegui@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
12:06 <marostegui@cumin2001> START - Cookbook sre.hosts.downtime [production]
12:05 <marostegui@cumin2001> START - Cookbook sre.hosts.downtime [production]
11:39 <marostegui> Reimage db2116 db2119 db2130 [production]
10:55 <moritzm> installing mesa security updates [production]
10:49 <godog> close april logstash indices on logstash 5 eqiad [production]
10:45 <moritzm> installing tomcat8 security updates [production]
10:38 <jayme> imported chartmuseum_0.12.0-1 to buster-wikimedia [production]
10:24 <marostegui@cumin2001> dbctl commit (dc=all): 'Repool db1093', diff saved to https://phabricator.wikimedia.org/P11604 and previous config saved to /var/cache/conftool/dbconfig/20200619-102447-marostegui.json [production]
10:21 <godog> start closing logstash indices for 2020.03 in elastic 5 eqiad [production]
09:22 <godog> restart elasticsearch on logstash1010 [production]
09:14 <apergos> rsync from dumpsdata1003 as root to labstore1007 of dumps output files to catch up, with --bwlimit=160000 up from 80000 [production]
08:45 <volans> backup netbox and run one-time script to reserve first IPs on all infra prefixes on Netbox - T233183 [production]
08:45 <godog> roll restart elasticsearch_5@production-logstash-eqiad [production]
08:26 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
08:21 <jmm@cumin2001> START - Cookbook sre.hosts.reboot-single [production]
08:18 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
08:15 <godog> roll-restart logstash elk5 for "JVM GC Old generation-s runs" alert [production]
08:12 <jmm@cumin2001> START - Cookbook sre.hosts.reboot-single [production]
08:00 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
07:59 <marostegui@cumin2001> dbctl commit (dc=all): 'Depool db1093', diff saved to https://phabricator.wikimedia.org/P11601 and previous config saved to /var/cache/conftool/dbconfig/20200619-075907-marostegui.json [production]
07:54 <jmm@cumin2001> START - Cookbook sre.hosts.reboot-single [production]
07:52 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
07:47 <jmm@cumin2001> START - Cookbook sre.hosts.reboot-single [production]
07:47 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
07:44 <marostegui@cumin2001> dbctl commit (dc=all): 'Repool db1098:3316', diff saved to https://phabricator.wikimedia.org/P11600 and previous config saved to /var/cache/conftool/dbconfig/20200619-074420-marostegui.json [production]
07:39 <jmm@cumin2001> START - Cookbook sre.hosts.reboot-single [production]
07:28 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
07:23 <jmm@cumin2001> START - Cookbook sre.hosts.reboot-single [production]
07:22 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]