2014-05-06 §
02:28 <LocalisationUpdate> completed (1.24wmf2) at 2014-05-06 02:27:21+00:00 [production]
00:02 <ori> synchronized wmf-config/CommonSettings.php 'Update wgFlowCacheVersion to 4.2' [production]
2014-05-05 §
23:57 <ori> updated /a/common to {{Gerrit|Id1f2e0acf}}: Drop wgFlowCacheKey from CommonSettings.php [production]
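(The 00:02 entry above bumps Flow's cache version and this 23:57 change drops the now-unused $wgFlowCacheKey. A minimal sketch of what the edit to wmf-config/CommonSettings.php plausibly looked like; the actual diff lives in Gerrit, not in this log:)

    <?php
    // Sketch only -- not the real wmf-config diff.
    // Bumping the cache version invalidates Flow's previously cached objects
    // instead of relying on a separate hand-maintained cache key.
    $wgFlowCacheVersion = '4.2';
    // $wgFlowCacheKey = '...';  // removed by Id1f2e0acf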
23:54 <ori> Finished scap: SWAT deploy for VisualEditor and Flow cherry-picks (duration: 09m 55s) [production]
23:44 <ori> Started scap: SWAT deploy for VisualEditor and Flow cherry-picks [production]
23:26 <ori> synchronized php-1.24wmf3/extensions/EventLogging 'Update EventLogging for Id23b37fbe for SWAT.' [production]
23:23 <ori> synchronized php-1.24wmf2/extensions/EventLogging 'Update EventLogging for Id23b37fbe for SWAT.' [production]
21:07 <jgage> trying on analytics1022: https://wikitech.wikimedia.org/wiki/Analytics/Kraken/Kafka/Administration#Recovering_a_laggy_broker_replica [production]
20:58 <RobH> ssl1001-1003 now have updated unified cert in service [production]
20:58 <jgage> both kafka brokers back in service [production]
20:54 <RobH> cp4001-4020 unified cert and nginx service reloaded, back in service [production]
20:50 <RobH> ssl1006 and ssl1009 are responsive to nginx and back in service [production]
20:43 <RobH> ssl1009 was refusing connections both before and after my ssl cert update. ssl1006 is presently refusing connections post update. they are set to disabled in pybal [production]
20:40 <RobH> ssl1008 back into service, ssl1009 already depooled [production]
20:38 <jgage> forced kafka broker reelection [production]
20:34 <RobH> ssl1007 going back into service, ssl1008 depooling [production]
20:25 <RobH> depooled ssl1006/7 for update [production]
20:25 <RobH> ssl1004/5 returned to service (and puppet agents enabled) [production]
20:21 <RobH> puppet agent has been re-enabled on ssl1001-1003 [production]
20:20 <RobH> ssl1004/5 disabled for update [production]
20:18 <RobH> putting ssl1002/3 back into service [production]
20:15 <subbu> deployed parsoid f2f1f1d7 (with deploy sha 71072f8a) [production]
19:58 <RobH> ssl1001 back in service, ssl1002-1003 set to disabled in pybal [production]
19:18 <RobH> depooling ssl1001 to test new certs live on system [production]
19:09 <RobH> disabled puppet on cp40XX, ssl10XX, and ssl30XX [production]
19:08 <bblack> synchronized wmf-config/squid.php 'REVERT: Update wgSquidServersNoPurge to use whole subnets for XFF checking' [production]
19:07 <bblack> updated /a/common to {{Gerrit|Iaf4d57d54}}: Revert "Use whole subnets in squid.php list for XFF acceptance" [production]
19:03 <bblack> synchronized wmf-config/squid.php 'Update wgSquidServersNoPurge to use whole subnets for XFF checking' [production]
19:01 <bblack> updated /a/common to {{Gerrit|I5a2d86ef0}}: Use whole subnets in squid.php list for XFF acceptance [production]
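(The 19:01–19:08 entries above switch $wgSquidServersNoPurge from individual cache-host IPs to whole subnets for X-Forwarded-For handling, then revert the change. A hedged sketch of the shape of such a list, with placeholder ranges since the real addresses are not in this log:)

    <?php
    // Placeholder ranges only -- the production subnets are not recorded here.
    // Entries in $wgSquidServersNoPurge may be single IPs or CIDR ranges; listing
    // whole ranges means MediaWiki trusts X-Forwarded-For from any proxy in them.
    $wgSquidServersNoPurge = array(
        '10.64.0.0/16',    // assumed internal cache network
        '198.51.100.0/24', // documentation range standing in for a public one
    );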
17:05 <aaron> synchronized wmf-config/CommonSettings.php 'Revert "Increased htmlCacheUpdate throttle"' [production]
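(The 17:05 entry above reverts an increased throttle on htmlCacheUpdate jobs. One plausible way such a throttle is expressed, assuming MediaWiki's $wgJobBackoffThrottling setting; the variable and value here are guesses, not taken from the actual change:)

    <?php
    // Assumed illustration only.
    // $wgJobBackoffThrottling caps per-second job execution for a given job type;
    // reverting an increase restores a lower cap on htmlCacheUpdate purge jobs.
    $wgJobBackoffThrottling = array(
        'htmlCacheUpdate' => 25, // jobs per second -- placeholder value
    );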
16:00 <anomie> synchronized php-1.24wmf3/extensions/MobileFrontend/ 'SWAT: Backport change 131237 to 1.24wmf3 to fix bug in MobileFrontend' [production]
15:59 <anomie> synchronized php-1.24wmf2/extensions/MobileFrontend/ 'SWAT: Backport change 131237 to 1.24wmf2 to fix bug in MobileFrontend' [production]
15:49 <anomie> synchronized php-1.24wmf2/includes/specials/SpecialAllmessages.php 'SWAT: Backport change 131041 to 1.24wmf2 to fix bug in Special:AllMessages' [production]
15:37 <anomie> synchronized php-1.24wmf2/includes/specials/SpecialAllmessages.php 'SWAT: Backport change 131041 to 1.24wmf2 to fix bug in Special:AllMessages' [production]
15:24 <anomie> synchronized php-1.24wmf3/includes/specials/SpecialAllmessages.php 'SWAT: Backport change 131041 to 1.24wmf3 to fix bug in Special:AllMessages' [production]
15:12 <anomie> synchronized php-1.24wmf3/includes/api/ApiLogin.php 'SWAT: Backport change 131056 to 1.24wmf3 to fix bug 64727' [production]
15:10 <anomie> synchronized php-1.24wmf2/includes/api/ApiLogin.php 'SWAT: Backport change 131056 to 1.24wmf2 to fix bug 64727' [production]
12:45 <akosiaris> removing various sdtpa devices from LibreNMS [production]
03:12 <LocalisationUpdate> ResourceLoader cache refresh completed at Mon May 5 03:11:15 UTC 2014 (duration 11m 14s) [production]
02:32 <^demon|away> … gitblit's wonkiness but they're certainly not helping matters. [production]
02:26 <LocalisationUpdate> completed (1.24wmf3) at 2014-05-05 02:25:34+00:00 [production]
02:14 <LocalisationUpdate> completed (1.24wmf2) at 2014-05-05 02:13:18+00:00 [production]
2014-05-04 §
20:57 <aaron> synchronized php-1.24wmf3/thumb.php 'c5ebd2aefce9e3fc5b994053078754021176f411' [production]
20:40 <aaron> synchronized php-1.24wmf3/thumb.php '6c230cbbc6ffa4d8909e88961ebf75755cf9c9d9' [production]
19:25 <ori> updated /a/common to {{Gerrit|I2916ef3bd}}: labs: stream recent changes to redis [production]
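(The 19:25 entry above wires Labs recent changes into Redis. A hedged sketch of the general shape of an $wgRCFeeds entry, with a made-up host and feed name; the real configuration is in Gerrit change I2916ef3bd, not this log:)

    <?php
    // Assumed illustration only -- host and feed name are placeholders.
    // $wgRCFeeds pushes every recent change to an external endpoint; a redis://
    // URI is handled by MediaWiki's Redis pub/sub feed engine.
    $wgRCFeeds['labs-redis'] = array(
        'formatter' => 'JSONRCFeedFormatter',
        'uri'       => 'redis://rc-redis.example.org:6379/rc',
    );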
09:59 <_joe_> restarted gitblit, stuck on GC as usual. [production]
08:40 <_joe_> restarted apache on tungsten as it was stuck communicating with uwsgi [production]
03:09 <LocalisationUpdate> ResourceLoader cache refresh completed at Sun May 4 03:08:41 UTC 2014 (duration 8m 40s) [production]
02:26 <LocalisationUpdate> completed (1.24wmf3) at 2014-05-04 02:25:06+00:00 [production]