2018-10-29
09:07 <addshore@deploy1001> Finished scap: sync with no changes (duration: 14m 39s) [production]
08:56 <elukey> restart yarn on an-master100[1,2] to pick up new zookeeper timeout settings (10s -> 20s) - T206943 [production]
08:52 <addshore@deploy1001> Started scap: sync with no changes [production]
08:50 <addshore@deploy1001> sync aborted: (no justification provided) (duration: 00m 03s) [production]
08:50 <addshore@deploy1001> Started scap: (no justification provided) [production]
08:49 <addshore@deploy1001> sync-l10n aborted: (no justification provided) (duration: 01m 19s) [production]
08:42 <gilles@deploy1001> Synchronized wmf-config/InitialiseSettings.php: T208088 Enable performance perception survey shuffling (duration: 00m 47s) [production]
08:38 <gilles@deploy1001> Synchronized php-1.33.0-wmf.1/extensions/QuickSurveys: T208088 Add ability to shuffle answers display order (duration: 01m 51s) [production]
08:35 <gilles@deploy1001> sync aborted: T208088 Enable performance QuickSurvey shuffling (duration: 00m 00s) [production]
08:35 <gilles@deploy1001> Started scap: T208088 Enable performance QuickSurvey shuffling [production]
08:35 <gilles@deploy1001> sync aborted: T208088 Enable performance QuickSurvey shuffling (duration: 06m 30s) [production]
08:29 <gilles@deploy1001> Started scap: T208088 Enable performance QuickSurvey shuffling [production]
08:07 <godog> reformat ms-be1042 xfs filesystems - T199198 [production]
08:00 <gilles> Deploying time-sensitive backport to QuickSurveys [production]
02:08 <onimisionipe> repooling wdqs1003. It has caught up with others [production]
2018-10-28
23:36 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.1/extensions/Graph: T184128 - I02da92de33 (duration: 00m 58s) [production]
19:17 <onimisionipe> depooling wdqs1003 to catch up on lag [production]
17:30 <elukey> restart yarn resource manager on an-master1002 to force failover to an-master1001 - T206943 [production]
16:36 <onimisionipe> repooling wdqs1003 - it didn't really catch up with the others, but lag times on the others are beginning to go up. [production]
13:38 <onimisionipe> depooling wdqs1003 again to catch up with others [production]
02:16 <onimisionipe> repooling wdqs1003 - it has caught up with the others [production]
00:03 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.1/extensions/CirrusSearch: T206967 - Ia23d19cf1e6 (duration: 01m 02s) [production]
2018-10-27
22:22 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.1/resources/src: T208093 - I25012a2c6f (duration: 00m 58s) [production]
21:24 <banyek> resetting power on db1117 as the host is DOWN and the serial console shows nothing [production]
20:56 <onimisionipe> depooling wdqs1003 to catch up with others [production]
16:18 <addshore@deploy1001> Synchronized wmf-config/Wikibase.php: Wikibase, fix duplicate specialSiteLinkGroups key T208124 (duration: 00m 54s) [production]
15:57 <addshore@deploy1001> Synchronized wmf-config/Wikibase.php: Wikibase, make sure specialSiteLinkGroups has wikidata group (duration: 00m 54s) [production]
12:32 <Amir1> Deployed patch for T207576 [production]
12:29 <banyek> resuming replication on s1@dbstore2002 as table compression is finished (T204930) [production]
09:17 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings-labs.php: BETA ONLY (4x patches) (duration: 00m 55s) [production]
09:09 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings-labs.php: Remove wgArticlePlaceholderSearchIntegrationBackend BETA override (duration: 01m 00s) [production]
08:34 <addshore@deploy1001> Synchronized wmf-config/Wikibase.php: Wikibase, Set siteLinkGroups settings on all wikis again T208048 T208077 T208074 (duration: 00m 54s) [production]
08:24 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings-labs.php: BETA ONLY T208043 (duration: 01m 06s) [production]
03:41 <SMalyshev> depool wdqs1003 again to let it catch up some more [production]
02:35 <smalyshev@deploy1001> Finished deploy [wdqs/wdqs@7eeede7]: Re-deploy Updater to deal with performance issues (duration: 00m 38s) [production]
02:34 <smalyshev@deploy1001> Started deploy [wdqs/wdqs@7eeede7]: Re-deploy Updater to deal with performance issues [production]
02:34 <smalyshev@deploy1001> Finished deploy [wdqs/wdqs@e9392f4]: Re-deploy Updater to deal with performance issues (duration: 00m 05s) [production]
02:33 <smalyshev@deploy1001> Started deploy [wdqs/wdqs@e9392f4]: Re-deploy Updater to deal with performance issues [production]
00:00 <mutante> icinga1001 - using wmf-auto-reimage to reinstall gets stuck at initial puppet run after reboot - Still waiting for Puppet after 105.0 minutes - aborting on cumin, logging in directly and manually running puppet (T202782 T208100) [production]
2018-10-26
22:54 <mutante> sodium - attempted to replace broken disk for RAID - did not go well [production]
21:38 <ejegg> updated fundraising CiviCRM from 97506677e8 to 65130ef3dd [production]
21:36 <aaron@deploy1001> Synchronized php-1.33.0-wmf.1/includes: 86c0b56b0d1bf66073fafb9bc00bafb87d2e3b9c (duration: 01m 14s) [production]
21:34 <aaron@deploy1001> Synchronized php-1.33.0-wmf.1/autoload.php: 86c0b56b0d1bf66073fafb9bc00bafb87d2e3b9c (duration: 00m 52s) [production]
21:33 <aaron@deploy1001> Synchronized php-1.33.0-wmf.1/tests: 86c0b56b0d1bf66073fafb9bc00bafb87d2e3b9c (duration: 01m 08s) [production]
20:03 <mutante> icinga1001 - disabled puppet, changed: check_result_reaper_frequency=2 ; max_check_result_reaper_time=10 to test if it lowers latency (T208066) [production]
19:40 <chasemp> remove 2fa for charlottepotero and cwd users in phab (so they can re-add) [production]
19:09 <SMalyshev> repooled wdqs1003 - looks like it caught up now [production]
17:18 <SMalyshev> depool wdqs1003 again to let it catch up some more [production]
16:10 <ejegg> updated payments-wiki to 34506ce636 [production]
15:32 <elukey> rolling restart of all prometheus-mcrouter-exporters on app/api servers - metrics not reported after the last mcrouter restart [production]