2020-04-24
19:41 <Amir1> applying T114117 on labswiki (wikitech) [production]
18:58 <shdubsh> restart elasticsearch on logstash2021 [production]
18:50 <shdubsh> restart elasticsearch on logstash2020 [production]
15:12 <cdanis@cumin1001> conftool action : set/pooled=false; selector: dnsdisc=wdqs,name=eqiad [production]
15:08 <addshore> depool and restart wdqs1006 to catch up with lag after deadlock T242453 [production]
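    The two wdqs entries above record depooling the eqiad discovery record via conftool and depooling/restarting wdqs1006 so it can catch up on lag. A minimal sketch of the equivalent confctl invocations, assuming the standard discovery and node object layout and that wdqs1006 is the host being worked on (the node selector and the service name in the restart are assumptions, not copied from the log):

        # Depool the wdqs discovery record for eqiad (what the conftool entry above records).
        sudo confctl --object-type discovery select 'dnsdisc=wdqs,name=eqiad' set/pooled=false

        # Depool a single backend host, restart its query service, and let it catch up
        # (selector and service name assumed).
        sudo confctl select 'name=wdqs1006.eqiad.wmnet' set/pooled=no
        sudo systemctl restart wdqs-blazegraph

        # Repool once lag has recovered.
        sudo confctl select 'name=wdqs1006.eqiad.wmnet' set/pooled=yes
        sudo confctl --object-type discovery select 'dnsdisc=wdqs,name=eqiad' set/pooled=true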
11:13 <Amir1> apply T250071 on s10 (labswiki) [production]
2020-04-23
22:06 <Urbanecm> Perform rename that was timing out at enwiki: Wikipedia talk:Introduction --> Wikipedia talk:Introduction (historical) using moveBatch.php ([[:meta:Special:Diff/20009402|request]]) [production]
18:38 <ejegg> updated payments-wiki from 1640f5e21e to 45bf1734e0 [production]
2020-04-22
08:55 <Urbanecm> Move User:Wikipedia:Introduction (historical) --> Wikipedia:Introduction (historical) at enwiki using moveBatch.php; the on-wiki interface was timing out [production]
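    Both enwiki moves above were done with the moveBatch.php maintenance script because the on-wiki move interface was timing out. A rough sketch of how such a run looks on a maintenance host, assuming the usual mwscript wrapper and a one-move-per-line input file (the file name, reason text, and option values are illustrative assumptions):

        # moves.txt: one move per line, "Old title|New title" (assumed input format).
        echo 'Wikipedia talk:Introduction|Wikipedia talk:Introduction (historical)' > moves.txt

        # Run the maintenance script against enwiki; --r sets the move reason and
        # --i sleeps between moves (options assumed from the script's usage text).
        mwscript moveBatch.php --wiki=enwiki --r 'Per meta:Special:Diff/20009402' --i 1 moves.txt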
05:50 <elukey@deploy1001> Finished deploy [analytics/refinery@30facc4]: Test of new scap settings (duration: 04m 42s) [production]
05:45 <elukey@deploy1001> Started deploy [analytics/refinery@30facc4]: Test of new scap settings [production]
05:25 <elukey@deploy1001> deploy aborted: log (duration: 00m 02s) [production]
05:24 <elukey@deploy1001> Started deploy [analytics/refinery@30facc4]: log [production]
01:55 <milimetric@deploy1001> Finished deploy [analytics/refinery@30facc4]: Analytics: another follow-up on the train, jar version bump (take 2, analytics1030 keeps failing) (duration: 00m 42s) [production]
01:54 <milimetric@deploy1001> Started deploy [analytics/refinery@30facc4]: Analytics: another follow-up on the train, jar version bump (take 2, analytics1030 keeps failing) [production]
01:54 <milimetric@deploy1001> Finished deploy [analytics/refinery@30facc4]: Analytics: another follow-up on the train, jar version bump (duration: 02m 54s) [production]
01:51 <milimetric@deploy1001> Started deploy [analytics/refinery@30facc4]: Analytics: another follow-up on the train, jar version bump [production]
01:51 <milimetric@deploy1001> deploy aborted: Analytics: another follow-up on the train, jar version bump (duration: 04m 08s) [production]
01:46 <milimetric@deploy1001> Started deploy [analytics/refinery@30facc4]: Analytics: another follow-up on the train, jar version bump [production]
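    The analytics/refinery entries above are ordinary scap3 deployments run from deploy1001, including an aborted attempt and "try 2" retries after the analytics1030 target failed. A minimal sketch of such a deploy from the deployment host, assuming the conventional /srv/deployment checkout path (the path is an assumption; the message is taken from the log entries):

        # On the deployment host, from the repo's scap-managed checkout (path assumed).
        cd /srv/deployment/analytics/refinery

        # Start a deploy; the quoted message is what appears in the SAL entries above.
        scap deploy 'Analytics: another follow-up on the train, jar version bump'

        # If a target such as analytics1030 fails mid-deploy, the run can be aborted
        # and retried, which is what the "deploy aborted" / "try 2" entries record.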
01:43 <reedy@deploy1001> Synchronized wmf-config/CommonSettings.php: T209749 (duration: 01m 01s) [production]
2020-04-21
23:41 <maryum> deploy complete for wdqs v0.3.23 [production]
23:36 <mstyles@deploy1001> Finished deploy [wdqs/wdqs@4e0d55f]: v0.3.23 (duration: 11m 35s) [production]
23:25 <mstyles@deploy1001> Started deploy [wdqs/wdqs@4e0d55f]: v0.3.23 [production]
23:19 <maryum> begin deploy of WDQS v0.3.23 on deploy1001 [production]
22:41 <eileen> process-control config revision is 6294adfbaa [production]
22:24 <milimetric@deploy1001> Finished deploy [analytics/refinery@64c5ec4]: Analytics: tiny follow-up on weekly train [analytics/refinery@64c5ec4] (duration: 37m 05s) [production]
21:56 <andrewbogott> rebooting cloudvirt1004, total raid controller failure [production]
21:50 <urandom> bootstrapping restbase2014-c — T250050 [production]
21:46 <milimetric@deploy1001> Started deploy [analytics/refinery@64c5ec4]: Analytics: tiny follow-up on weekly train [analytics/refinery@64c5ec4] [production]
21:38 <milimetric@deploy1001> Finished deploy [analytics/refinery@35781db]: Regular Analytics weekly train deploy [analytics/refinery@35781db] try 2 (analytics1030 failed with OSError the first time) (duration: 00m 13s) [production]
21:37 <milimetric@deploy1001> Started deploy [analytics/refinery@35781db]: Regular Analytics weekly train deploy [analytics/refinery@35781db] try 2 (analytics1030 failed with OSError the first time) [production]
21:21 <milimetric@deploy1001> Finished deploy [analytics/refinery@35781db]: Regular Analytics weekly train deploy [analytics/refinery@35781db] (duration: 16m 19s) [production]
21:05 <milimetric@deploy1001> Started deploy [analytics/refinery@35781db]: Regular Analytics weekly train deploy [analytics/refinery@35781db] [production]
21:05 <milimetric@deploy1001> Finished deploy [analytics/refinery@35781db] (thin): Regular Analytics weekly train deploy THIN [analytics/refinery@35781db] (duration: 00m 08s) [production]
21:05 <milimetric@deploy1001> Started deploy [analytics/refinery@35781db] (thin): Regular Analytics weekly train deploy THIN [analytics/refinery@35781db] [production]
19:09 <rzl> mcrouter certs renewed on puppetmaster1001 (again); puppet re-enabled on mcrouter hosts and will update certs naturally over the next 30m T248093 [production]
19:02 <urandom> bootstrapping restbase2014-b — T250050 [production]
18:28 <hoo> Updated the Wikidata property suggester with data from the 2020-04-06 JSON dump and applied the T132839 workarounds [production]
18:19 <rzl> disabling puppet on all mcrouter hosts for cert renewal T248093 [production]
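    The two rzl entries above (18:19 disable, 19:09 re-enable) bracket the mcrouter certificate renewal on puppetmaster1001. A hedged sketch of how such a fleet-wide puppet toggle is typically driven from a cumin host, assuming a host alias named A:mcrouter and the standard disable-puppet/enable-puppet wrappers (the alias name and reason string are assumptions):

        # Disable puppet on all mcrouter hosts before touching the certs (alias assumed).
        sudo cumin 'A:mcrouter' "disable-puppet 'mcrouter cert renewal - T248093'"

        # ... renew the certificates on the puppetmaster ...

        # Re-enable; each host picks up the new certs on its next regular puppet run.
        sudo cumin 'A:mcrouter' "enable-puppet 'mcrouter cert renewal - T248093'"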
17:19 <pt1979@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
17:16 <pt1979@cumin2001> START - Cookbook sre.hosts.downtime [production]
16:49 <urandom> bootstrapping restbase2014-a — T250050 [production]
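    The urandom entries (16:49 -a, 19:02 -b, 21:50 -c) bootstrap the three Cassandra instances on restbase2014 one at a time for T250050, letting each finish streaming before the next starts. A rough sketch of how bootstrap progress is usually checked, assuming plain nodetool (multi-instance hosts typically wrap this as a per-instance command, which is an assumption here):

        # UJ = joining (still streaming), UN = up/normal (bootstrap finished).
        nodetool status

        # Watch streaming progress for the instance currently bootstrapping.
        nodetool netstats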
15:40 <cmjohnson1> replacing mgmt switch on a6-eqiad T250652 [production]
15:38 <hashar> CI is back; patches will need to be rechecked by commenting "recheck" in Gerrit. [production]
15:32 <hashar> Restarting Gerrit T250820 T246973 [production]
15:26 <hashar> CI / Zuul does not get any events for some reason :/ [production]
14:59 <volans@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
14:59 <volans@cumin1001> START - Cookbook sre.hosts.downtime [production]
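    The START/END pairs above come from the sre.hosts.downtime cookbook, which schedules Icinga downtime for a host before maintenance so alerts stay quiet. A minimal sketch of an invocation from a cumin host, with the host, duration, reason, and option names all illustrative assumptions rather than values taken from these entries:

        # Downtime a host for two hours with a reason and a task reference
        # (host and option names assumed; adjust to the cookbook's --help output).
        sudo cookbook sre.hosts.downtime --hours 2 -r 'mgmt switch replacement' -t T250652 'host1001.eqiad.wmnet'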
14:51 <hashar> contint2001: manually dropping /var/lib/docker (we now use /srv/docker) [production]
14:48 <jbond42> restart haproxy on dns-auth [production]