2019-06-03 §
05:05 <marostegui> Remove db2037 from tendril and zarcillo T224720 [production]
05:04 <marostegui> Stop MySQL on db2037 for decommission T224720 [production]
04:56 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool es1019 T213422 (duration: 00m 51s) [production]
2019-06-02 §
20:28 <onimisionipe> pooled wdqs1007. It caught up on lag [production]
15:24 <onimisionipe> depooled wdqs1007 to catch up on lag [production]
15:22 <onimisionipe> depooling wdqs internal cluster hosts to allow them to catch up on lag; depooling one at a time [production]
03:09 <andrewbogott> restarting pdns-recursor on cloudservices1003 and 1004 (but not at the same time) [production]
2019-06-01 §
22:49 <krinkle@deploy1001> Synchronized php-1.34.0-wmf.7/extensions/3D/modules/mmv.3d.js: T224812 / bd4fbfddbe1a0 (duration: 01m 07s) [production]
2019-05-31 §
21:47 <aaron@deploy1001> Synchronized wmf-config/db-eqiad.php: Set "secret" field in $wgLBFactoryConf for ChronologyProtector HMACs (duration: 00m 47s) [production]
21:46 <aaron@deploy1001> Synchronized wmf-config/db-codfw.php: Set "secret" field in $wgLBFactoryConf for ChronologyProtector HMACs (duration: 00m 50s) [production]
21:10 <bblack> cp3034: repool - T222937 [production]
20:04 <bblack> cp3034: depool for reimage - T222937 [production]
18:44 <marostegui> Start MySQL on es1019 - T213422 [production]
18:34 <jgleeson> payments-wiki updated from a76658f0a3 to c6c7bbf71e [production]
17:29 <andrewbogott> added jeh to the 'ops' group in ldap [production]
16:20 <ariel@deploy1001> Finished deploy [dumps/dumps@fd6100a]: remove orderrevs config option, unneeded now (duration: 00m 03s) [production]
16:20 <ariel@deploy1001> Started deploy [dumps/dumps@fd6100a]: remove orderrevs config option, unneeded now [production]
15:05 <bblack> cp3039: restart varnish-be for mbox lag (likely induced by 3049's depool for ATS conversion!) [production]
15:00 <Krinkle> krinkle@deploy1001: pulling down 6f91b41 for php-1.34.0-wmf.7/extensions/ORES (without deploy), commit seems test-only [production]
14:59 <Krinkle> krinkle@deploy1001: git status in php-1.34.0-wmf.7/ is dirty (extensions/ORES) [production]
14:52 <bblack> pool cp3049 back into service - T222937 [production]
14:32 <onimisionipe> depool maps2004 (again) - T224395 [production]
14:32 <elukey> powercycle notebook1003 - host stuck due to user processes, no ssh available, OOM didn't trigger [production]
14:20 <_joe_> rolling restart of php-fpm across production to pick up the shorter revalidate frequency for T224491 [production]
14:10 <bblack> reboot cp3049 - T222937 [production]
13:16 <bblack> depool cp3049 for reimage - T222937 [production]
11:46 <jynus> stop and upgrade db2084 [production]
11:09 <jynus@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1099 after maintenance (duration: 00m 48s) [production]
10:54 <jynus> depool labsdb1010 for maintenance [production]
10:47 <arturo> merging multiple commits to labs/private.git. We now require `puppet-merge --labsprivate` and people may not be yet aware of that [production]
09:28 <jynus> stop and upgrade db2073 [production]
09:11 <jynus> stop and upgrade db2095 (s2, s4, s6, s7) [production]
08:33 <jynus> upgrade and restart db2065 [production]
08:16 <jynus> depool labsdb1011 for maintenance [production]
07:54 <jynus@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1099 with low weight (duration: 00m 49s) [production]
07:43 <_joe_> restarting php-fpm on canaries [production]
07:24 <_joe_> repooling mw1348 [production]
07:24 <jynus> upgrade and restart labsdb1009 [production]
07:15 <_joe_> draining mw1348 from traffic [production]
07:14 <jynus> depool labsdb1009 for maintenance [production]
06:55 <jynus> upgrade and restart db2058 [production]
06:33 <_joe_> repooled mw1348 [production]
06:21 <jijiki> depool mw1348 [production]
06:16 <_joe_> restarting php-fpm on mw1348 [production]
00:08 <jgleeson> Updating civicrm from bb4acf3d8a to e028bfcd63 [production]
2019-05-30 §
23:36 <XioNoX> remove BGP sessions to starhub on cr4-ulsfo (left the IXP) [production]
22:59 <marxarelli> deleted 95 docker images from contint1001, freeing ~ 8G on / cc: T219850 [production]
22:59 <XioNoX> add terms to drop specific icmp frag packets from cr1/2-eqiad - T224186 [production]
22:53 <marxarelli> deleting stale docker images from contint1001, cc: T207707 T219850 [production]
22:25 <mutante> phab2001 / phab1003 - why is 'git status' in /srv/phab/phabricator unclean with lots of file deletions, and also not identical between the two hosts [production]