2016-10-19
08:12 <moritzm> installing quagga security updates [production]
07:31 <_joe_> disabled profiling on mw1189, hhvm keeps crashing [production]
06:50 <_joe_> installing jemalloc with memory profiling enabled on mw1189 [production]
2016-10-18
23:04 <Dereckson> This full scap pulled three changes from the EU SWAT: [[gerrit:316069]] TimedMediaHandler, [[gerrit:316585]] MobileFrontend, [[gerrit:315901]] ULS [production]
23:03 <demon@mira> Finished scap: bringing full cluster back into sync (duration: 25m 13s) [production]
22:38 <demon@mira> Started scap: bringing full cluster back into sync [production]
22:28 <demon@mira> Synchronized README: Bringing co-masters back in sync (duration: 13m 10s) [production]
21:37 <mutante> added Dpatrick to WMF LDAP group [production]
18:32 <dereckson@mira> Synchronized wmf-config/LabsServices.php: Elastic@deployment-prep: Remove deployment-elastic08 from the cluster (no-op in prod, labs only) (duration: 00m 47s) [production]
18:30 <dereckson@mira> Synchronized wmf-config/CirrusSearch-labs.php: Elastic@deployment-prep: force the number of replicas to 1 max (no-op in prod, labs only) (duration: 01m 18s) [production]
17:55 <dcausse> warming up elastic@codfw from wasat.codfw.wmnet [production]
17:34 <jynus> stopping mysql, cloning db1064->db1053; upgrading [production]
17:01 <bblack> upgrading nginx on cache_maps - T144523 [production]
16:57 <ejegg> updated payments-wiki from b4ad60e739b9dbb97f08a3623db961a74682422a to 27b464fd4383647fc2e7f0a613f290d6edccd22f [production]
15:47 <godog> eqiad-prod: ms-be1022 to weight 3000 T136631 [production]
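The weight change above adjusts the Swift object ring: raising the device weights on ms-be1022 makes the next rebalance assign the host more data. A minimal sketch of such a bump, assuming a placeholder builder file and device id; production rings are managed by their own tooling:

    # Bump the weight of one of ms-be1022's devices and rebalance.
    # "object.builder" and "d42" are placeholders, not the real ring/device.
    swift-ring-builder object.builder set_weight d42 3000
    swift-ring-builder object.builder rebalance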
15:16 <andrewbogott> upgrading puppetmaster on labtestcontrol2001 to trusty/3.8.5 [production]
15:06 <bblack> upgrading nginx on all remaining cache_misc (eqiad, esams) - T144523 [production]
14:54 <bblack> upgrading nginx on all cache_misc @ codfw - T144523 [production]
14:54 <chasemp> rsync tools from labstore1001 to labstore1004 [production]
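The labstore entry above is a straight file copy between the two hosts. A rough sketch; the /srv/tools/ path is an assumption, only the hosts come from the log:

    # Copy the tools tree, preserving hardlinks, ACLs and xattrs (assumed path).
    rsync -aHAX --delete /srv/tools/ labstore1004:/srv/tools/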
14:43 <bblack> upgrading nginx on all cache_misc @ ulsfo - T144523 [production]
14:40 <marostegui> Shutting down es2015 for hardware maintenance - T147769 [production]
14:21 <bblack> upgrading nginx on cp4001 (cache_misc ulsfo) as prod canary [production]
14:18 <bblack> uploading nginx-1.11.4+wmf3 to carbon jessie-wikimedia - T144523 [production]
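Read bottom-up, the T144523 entries above follow the usual rollout: the nginx-1.11.4+wmf3 package is uploaded to the internal apt repo on carbon, cp4001 is upgraded as a production canary, then the remaining cache_misc sites (ulsfo, codfw, eqiad, esams) and later cache_maps follow. A sketch of what a single-host upgrade step might look like; the depool/pool helpers and package names are assumptions:

    depool                                      # take the cache host out of rotation (assumed helper)
    apt-get update
    apt-get install -y nginx-full nginx-common  # picks up 1.11.4+wmf3 from the internal repo
    nginx -v                                    # confirm the new version before serving traffic again
    pool                                        # put the host back into rotation (assumed helper)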
13:58 <jynus> restarting and upgrading db2049 and es2019 to test new config [production]
13:53 <jynus> applying new init.d script on all mariadb 10 servers [production]
12:52 <elukey> mw1169 back in service after reimage (MW Jobrunner) [production]
11:55 <elukey> removed /etc/mysql/conf.d/research-client.cnf from stat1002 (root:root perms; the file belongs only on stat1003, not on stat1002) [production]
11:37 <elukey> reimaging mw1169 to Debian Jessie (MW Jobrunner) [production]
10:40 <elukey> mw1168.eqiad.wmnet back in service after reimage (MW Jobrunner) [production]
09:28 <elukey> reimaging mw1168 to Debian Jessie (MW Jobrunner) [production]
09:25 <elukey> varnishkafka restarting in upload/misc/maps with new settings (https://gerrit.wikimedia.org/r/316306) [production]
09:18 <gehel> upgrade nodejs to 4.6.0 on maps2* servers [production]
08:56 <moritzm> reimaging tin to jessie [production]
08:53 <marostegui> Deploying ALTER table on S4 commonswiki (db1064 — last host) - T147305 [production]
08:42 <jynus> clone db1052 -> db1053, will perform maintenance (db restarts, reboots on both) at the same time [production]
07:57 <marostegui@mira> Synchronized wmf-config/db-eqiad.php: Depool db1064 as it needs an ALTER table and pool db1068 temporarily to serve vslow and dump service - T147305 (duration: 02m 53s) [production]
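The 08:53 and 07:57 entries above (read bottom-up) are the usual depool/alter/repool cycle for T147305: db1064 is taken out of the S4 (commonswiki) eqiad config, db1068 temporarily covers vslow and dump traffic, and the schema change runs on the depooled host. A placeholder sketch only; the actual DDL is not in the log, so the table and column below are hypothetical:

    # Run the schema change on the depooled replica (hypothetical table/column).
    mysql -h db1064.eqiad.wmnet commonswiki \
      -e "ALTER TABLE example_table ADD COLUMN example_column INT NOT NULL DEFAULT 0"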
03:19 <mutante> restarted grrrit-wm [production]
03:18 <mutante> gerrit has logs now in /var/log/gerrit/ [production]
03:15 <mutante> restarting gerrit for logging config change [production]
02:37 <l10nupdate@tin> ResourceLoader cache refresh completed at Tue Oct 18 02:37:01 UTC 2016 (duration 5m 49s) [production]
02:31 <mwdeploy@tin> scap sync-l10n completed (1.28.0-wmf.22) (duration: 10m 20s) [production]
00:48 <bblack> restarting API hhvms with >40% mem usage via salt every 10 minutes in a loop from here forward. screen session on neodymium, named api-hhvm-restarts [production]
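A hypothetical reconstruction of the loop described above, as it might be run from the neodymium screen session; the salt grain target, the memory check and the service name are assumptions, not the exact commands used:

    # Every 10 minutes, restart hhvm on API appservers whose hhvm processes
    # use more than ~40% of memory (threshold taken from the log entry).
    while true; do
      salt -G 'cluster:api_appserver' cmd.run \
        'pct=$(ps -C hhvm -o pmem= | awk "{s+=\$1} END {print int(s)}"); [ "${pct:-0}" -gt 40 ] && service hhvm restart || true'
      sleep 600
    done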
00:39 <mutante> restarted hhvm on mw1281 (was at 47.7% usage) [production]
00:31 <bblack> restarting hhvm on API nodes where it's using >30% mem [production]
00:22 <bblack> restarting hhvm on *API* nodes where it's using >50% mem [production]
00:22 <bblack> restarting hhvm on nodes where it's using >50% mem [production]
00:05 <mutante> restarted hhvm on mw1194,mw1197,mw1198 [production]
2016-10-17
23:27 <Pchelolo> running import deletions script on restbase1007 [production]
22:26 <mutante> restarted gerrit on cobalt [production]
22:07 <Pchelolo> running restriction import script on restbase1007 [production]