2019-10-14
07:01 <marostegui@cumin1001> START - Cookbook sre.hosts.decommission [production]
06:03 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Remove db2068 from config T235399 (duration: 00m 51s) [production]
06:02 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Remove db2068 from config T235399 (duration: 00m 53s) [production]
05:47 <marostegui> Remove db2068 from tendril and zarcillo T235399 [production]
04:56 <marostegui> Depool labsdb1009 for on-site maintenance - T233273 [production]
04:56 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1101:3317 for schema change T233625', diff saved to https://phabricator.wikimedia.org/P9318 and previous config saved to /var/cache/conftool/dbconfig/20191014-045629-marostegui.json [production]
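The dbctl commit (dc=all) entries above reflect the usual depool-then-commit flow for a MariaDB instance. A minimal sketch of that flow for the db1101:3317 depool logged at 04:56, assuming dbctl's instance/config subcommands; the exact arguments may differ from the version deployed at the time:

  dbctl instance db1101:3317 depool
  dbctl config commit -m "Depool db1101:3317 for schema change T233625"

Repooling after the maintenance reverses the first step (dbctl instance db1101:3317 pool) followed by another commit, which is what the later "Repool ... after schema change" entries record.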
2019-10-13
00:52 <krinkle@deploy1001> Synchronized wmf-config/CommonSettings.php: ec77b1b515940c73 (duration: 00m 55s) [production]
2019-10-12
23:21 <krinkle@deploy1001> Synchronized wmf-config/profiler.php: bfa8bb69c1f, T231564 (duration: 00m 51s) [production]
21:07 <krinkle@deploy1001> Synchronized php-1.35.0-wmf.1/includes/resourceloader/ResourceLoaderStartUpModule.php: 8c6baeae2 (duration: 00m 53s) [production]
20:57 <Urbanecm> Reset user email of User:Gardini (T235318) [production]
18:38 <_joe_> deleting zotero pods with excessive memory usage in eqiad [production]
16:16 <reedy@deploy1001> Synchronized php-1.35.0-wmf.1/includes/api/ApiQueryBase.php: T235334 (duration: 00m 51s) [production]
16:15 <reedy@deploy1001> Synchronized php-1.35.0-wmf.1/includes/api/ApiQueryBacklinksprop.php: T235334 (duration: 00m 56s) [production]
04:37 <krinkle@deploy1001> Synchronized wmf-config/profiler.php: 29d846938c898dd (duration: 00m 57s) [production]
2019-10-11
15:39 <AndyRussG> updated fruec from 18d89675d0 to 1e6a6ee2de [production]
13:57 <moritzm> rebooting cloudbackup2001 [production]
13:57 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
13:57 <jmm@cumin2001> START - Cookbook sre.hosts.downtime [production]
13:01 <moritzm> installing the 4.9.189 Linux kernel update from the last stretch point release (no reboots, deploying the package only at this point) [production]
12:48 <XioNoX> disable SIP ALG on pfw3-eqiad - T235150 [production]
12:47 <XioNoX> disable SIP ALG on pfw3-codfw - T235150 [production]
12:45 <moritzm> installing libxslt security updates [production]
12:35 <moritzm> installing zsh updates from stretch point release [production]
12:33 <moritzm> installing gsoap security updates on stretch [production]
12:32 <marostegui@cumin1001> dbctl commit (dc=all): 'Repool db1098:3317 after schema change T233625', diff saved to https://phabricator.wikimedia.org/P9314 and previous config saved to /var/cache/conftool/dbconfig/20191011-123159-marostegui.json [production]
12:31 <moritzm> installing libcaca security updates on stretch [production]
12:25 <XioNoX> push firewall policies to pfw3-eqiad - T235074 [production]
12:24 <XioNoX> push firewall policies to pfw3-codfw - T235074 [production]
11:51 <moritzm> installing unzip security updates on stretch [production]
11:08 <moritzm> upgrading debdeploy to 0.0.99.11 [production]
10:18 <moritzm> imported debdeploy 0.0.99.11 for jessie/stretch/buster-wikimedia [production]
10:11 <hashar> Restarting Gerrit # T224448 [production]
10:02 <hashar> gerrit: killed a stalled SendEmail thread that was holding a lock [production]
08:34 <moritzm> remove kafka2001-2003 from debmonitor DB (T235125) [production]
08:32 <moritzm> remove kafka1001-1003 from debmonitor DB (T235125) [production]
08:30 <jmm@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
08:28 <jmm@cumin1001> START - Cookbook sre.hosts.downtime [production]
08:04 <moritzm> reimaging labpuppetmaster1002 (spare) for some tests related to microcode loading [production]
07:32 <XioNoX> rollback of the two previous HE peering deactivations [production]
07:30 <XioNoX> deactivate HE peering on cr2-eqord for packet loss [production]
07:28 <XioNoX> deactivate HE peering on cr1-eqiad for packet loss [production]
06:13 <marostegui> Compress tables on db2085:3318 - T232446 [production]
06:08 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2085:3318 for compression - T232446', diff saved to https://phabricator.wikimedia.org/P9311 and previous config saved to /var/cache/conftool/dbconfig/20191011-060814-marostegui.json [production]
05:27 <papaul> rebooting an-conf1001 for serial troubleshooting [production]
05:13 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) [production]
05:13 <marostegui@cumin1001> START - Cookbook sre.hosts.decommission [production]
04:54 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1098:3317 for schema change T233625', diff saved to https://phabricator.wikimedia.org/P9310 and previous config saved to /var/cache/conftool/dbconfig/20191011-045409-marostegui.json [production]
02:14 <mutante> gerrit - "manually" starting replication via ssh command [production]
02:13 <mutante> gerrit - restart service to ensure last config change is picked up [production]
02:10 <mutante> gerrit1001 - attempt to manually start replication to github [production]
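The gerrit1001 entries at 02:10-02:14 describe triggering replication to GitHub by hand over Gerrit's SSH interface. A minimal sketch of such a call, assuming the replication plugin's standard "replication start" command and an illustrative admin account and remote URL pattern (neither is taken from the log):

  ssh -p 29418 admin@gerrit.wikimedia.org replication start --url github --wait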