2020-04-21
23:25 <mstyles@deploy1001> Started deploy [wdqs/wdqs@4e0d55f]: v0.3.23 [production]
23:19 <maryum> begin deploy of WDQS v0.3.23 on deploy1001 [production]
23:17 <milimetric> restarted webrequest bundle, babysitting that first before going on [analytics]
23:06 <bstorm_> repooled tools-k8s-worker-38/52, tools-sgewebgrid-lighttpd-0918/9 and tools-sgeexec-0901 T250869 [tools]
23:00 <milimetric> forgot a small jar version update, finished deploying now [analytics]
22:41 <eileen> process-control config revision is 6294adfbaa [production]
22:24 <milimetric@deploy1001> Finished deploy [analytics/refinery@64c5ec4]: Analytics: tiny follow-up on weekly train [analytics/refinery@64c5ec4] (duration: 37m 05s) [production]
22:12 <andrewbogott> moving cloudvirt1004 out of the 'standard' aggregate and into the 'maintenance' aggregate [admin]
22:09 <bstorm_> depooling tools-sgewebgrid-lighttpd-0918/9 and tools-sgeexec-0901 T250869 [tools]
22:02 <bstorm_> draining tools-k8s-worker-38 and tools-k8s-worker-52 as they are on the crashed host T250869 [tools]
21:56 <andrewbogott> rebooting cloudvirt1004, total raid controller failure [production]
21:50 <urandom> bootstrapping restbase2014-c — T250050 [production]
21:46 <milimetric@deploy1001> Started deploy [analytics/refinery@64c5ec4]: Analytics: tiny follow-up on weekly train [analytics/refinery@64c5ec4] [production]
21:38 <milimetric> deployed twice because analytics1030 failed with "OSError {}" but seems ok after the second deploy [analytics]
21:38 <milimetric@deploy1001> Finished deploy [analytics/refinery@35781db]: Regular Analytics weekly train deploy [analytics/refinery@35781db] try 2 (analytics1030 failed with OSError the first time) (duration: 00m 13s) [production]
21:37 <milimetric@deploy1001> Started deploy [analytics/refinery@35781db]: Regular Analytics weekly train deploy [analytics/refinery@35781db] try 2 (analytics1030 failed with OSError the first time) [production]
21:21 <milimetric@deploy1001> Finished deploy [analytics/refinery@35781db]: Regular Analytics weekly train deploy [analytics/refinery@35781db] (duration: 16m 19s) [production]
21:07 <wm-bot> <lucaswerkmeister> deployed 6634452b4c (increase uWSGI buffer) [tools.lexeme-forms]
21:05 <milimetric@deploy1001> Started deploy [analytics/refinery@35781db]: Regular Analytics weekly train deploy [analytics/refinery@35781db] [production]
21:05 <milimetric@deploy1001> Finished deploy [analytics/refinery@35781db] (thin): Regular Analytics weekly train deploy THIN [analytics/refinery@35781db] (duration: 00m 08s) [production]
21:05 <milimetric@deploy1001> Started deploy [analytics/refinery@35781db] (thin): Regular Analytics weekly train deploy THIN [analytics/refinery@35781db] [production]
19:09 <rzl> mcrouter certs renewed on puppetmaster1001 (again); puppet re-enabled on mcrouter hosts and will update certs naturally over the next 30m T248093 [production]
19:02 <urandom> bootstrapping restbase2014-b — T250050 [production]
18:28 <hoo> Updated the Wikidata property suggester with data from the 2020-04-06 JSON dump and applied the T132839 workarounds [production]
18:19 <rzl> disabling puppet on all mcrouter hosts for cert renewal T248093 [production]
17:19 <pt1979@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
17:16 <pt1979@cumin2001> START - Cookbook sre.hosts.downtime [production]
16:49 <urandom> bootstrapping restbase2014-a — T250050 [production]
16:01 <jeh> restart cloudceph mon and osd services for openssl upgrades [admin]
15:40 <cmjohnson1> replacing mgmt switch on a6-eqiad T250652 [production]
15:38 <hashar> CI is back; patches need to be rechecked by commenting "recheck" in Gerrit. [production]
15:32 <hashar> Restarting Gerrit T250820 T246973 [production]
15:26 <hashar> CI / Zuul does not get any events for some reason :/ [production]
15:05 <Krinkle> install 'qemu-system-x86' package on integration-agent-qemu-1001 [releng]
14:59 <volans@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
14:59 <volans@cumin1001> START - Cookbook sre.hosts.downtime [production]
14:51 <hashar> contint2001: manually dropping /var/lib/docker (we now use /srv/docker) [production]
14:48 <jbond42> restart haproxy on dns-auth [production]
14:48 <hashar> restarting docker on contint2001 [production]
14:48 <Krinkle> Creating integration-agent-qemu-1001 to experiment with VM-based CI jobs – T250808 [releng]
14:47 <volker-e@deploy1001> Finished deploy [design/style-guide@d101234]: Deploy design/style-guide: (duration: 00m 09s) [production]
14:47 <volker-e@deploy1001> Started deploy [design/style-guide@d101234]: Deploy design/style-guide: [production]
14:45 <jbond42> puppet enabled again [production]
14:40 <moritzm> restarting apache on miscweb [production]
14:37 <moritzm> restarting apache on netbox1001 [production]
14:36 <jbond42> disable puppet fleet-wide to restart puppetmaster [production]
14:28 <moritzm> installing OpenSSL security updates [production]
14:27 <elukey> add motd to notebook100[3,4] to alert about host deprecation (in favor of stat100x) [analytics]
14:17 <vgutierrez> rolling upgrade of ats to version 8.0.7-1wm1 [production]
14:16 <moritzm> installing OpenSSL updates on caches [production]