2017-06-10
11:54 <andrewbogott> cleared leaked instances out of the nova fullstack test. Six were up and running and reachable, one had a network failure. [production]
10:19 <TimStarling> on terbium: running purgeParserCache.php prior to cron job due to observed disk space usage increase [production]
10:00 <marostegui> Purge binary logs on pc1006-pc2006 [production]
09:58 <marostegui> Purge binary logs on pc1004-pc2004 and pc1005-pc2005 [production]
02:22 <l10nupdate@tin> ResourceLoader cache refresh completed at Sat Jun 10 02:22:22 UTC 2017 (duration 6m 13s) [production]
02:16 <l10nupdate@tin> scap sync-l10n completed (1.30.0-wmf.4) (duration: 05m 33s) [production]
2017-06-09
21:18 <mobrovac@tin> Finished deploy [restbase/deploy@4e5cb35]: (no justification provided) (duration: 01m 40s) [production]
21:17 <mobrovac@tin> Started deploy [restbase/deploy@4e5cb35]: (no justification provided) [production]
21:07 <mobrovac@tin> Finished deploy [restbase/deploy@4e5cb35]: Ensure the extract field is always present in the summary response - T167045 (take #2) (duration: 05m 23s) [production]
21:02 <mobrovac@tin> Started deploy [restbase/deploy@4e5cb35]: Ensure the extract field is always present in the summary response - T167045 (take #2) [production]
21:01 <mobrovac@tin> Finished deploy [restbase/deploy@4e5cb35]: Ensure the extract field is always present in the summary response - T167045 (duration: 04m 57s) [production]
20:56 <mobrovac@tin> Started deploy [restbase/deploy@4e5cb35]: Ensure the extract field is always present in the summary response - T167045 [production]
20:54 <mobrovac@tin> Finished deploy [restbase/deploy@4e5cb35] (staging): Ensure the extract field is always present in the summary response (duration: 03m 39s) [production]
20:50 <mobrovac@tin> Started deploy [restbase/deploy@4e5cb35] (staging): Ensure the extract field is always present in the summary response [production]
20:12 <demon@tin> Synchronized php-1.30.0-wmf.4/extensions/CirrusSearch/includes/Job/DeleteArchive.php: Really fix it this time (duration: 00m 43s) [production]
19:49 <mutante> fermium: $ sudo /usr/local/sbin/disable_list wikino-bureaucrats (T166848) [production]
19:46 <RainbowSprinkles> mw1299: running scap pull, maybe out of date? [production]
18:12 <gehel> retry allocation of failed shards on elasticsearch eqiad [production]
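(Retrying allocation of failed shards is typically done through the cluster reroute API with the `retry_failed` flag; the exact invocation used here is not recorded, so this is a sketch of the standard call, not the operator's literal command:)

```
POST /_cluster/reroute?retry_failed=true
```

This asks Elasticsearch to re-attempt allocation of shards that previously exhausted their allocation retries.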
15:47 <_joe_> installed python-service-checker 0.1.3 on einsteinium,tegmen T167048 [production]
15:44 <_joe_> uploaded service-checker 0.1.3 [production]
15:11 <_joe_> upgraded python-service-checker to 0.1.2 on tegmen,einsteinium [production]
13:18 <godog> upgrade thumbor to 0.1.40 - T167462 [production]
12:36 <gehel> reducing high watermark on elasticsearch eqiad to rebalance shards [production]
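(Lowering the high disk watermark forces Elasticsearch to relocate shards off nodes above the threshold. The actual value applied is not logged; the fragment below is a hypothetical example of the transient cluster setting involved, with `85%` as an illustrative value only:)

```json
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.high": "85%"
  }
}
```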
07:51 <elukey> run megacli -LDSetProp -Direct -LALL -aALL on analytics[1058-1068] - T166140 [production]
07:40 <moritzm> upgrade app servers in codfw running HHVM 3.18 to +wmf5 [production]
07:26 <elukey> run megacli -LDSetProp ADRA -LALL -aALL on analytics[1058-1068] - T166140 [production]
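(For context on the two megacli invocations above and at 07:51: `-LDSetProp` changes logical-drive properties across all logical drives (`-LALL`) on all adapters (`-aALL`). As far as the standard MegaCLI options go, `ADRA` enables adaptive read-ahead and `-Direct` sets direct (non-cached) I/O; this annotation is an interpretation of the logged commands, not part of the original log:)

```
megacli -LDSetProp ADRA -LALL -aALL     # adaptive read-ahead on all logical drives, all adapters
megacli -LDSetProp -Direct -LALL -aALL  # direct I/O cache policy on all logical drives, all adapters
```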
07:15 <elukey> deleted /etc/logrotate.d/nova-manage from labtestvirt2003 to reduce cronspam (same solution used in T132422#2679434) [production]
06:58 <moritzm> updating mw117* to HHVM 3.18+wmf5 [production]
06:41 <moritzm> updating mw1161 to HHVM 3.18 [production]
05:57 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1056 - T166206 (duration: 00m 41s) [production]
05:51 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Repool db1074 - T166205 (duration: 00m 42s) [production]
02:25 <l10nupdate@tin> ResourceLoader cache refresh completed at Fri Jun 9 02:25:29 UTC 2017 (duration 6m 27s) [production]
02:19 <l10nupdate@tin> scap sync-l10n completed (1.30.0-wmf.4) (duration: 06m 04s) [production]
00:36 <ejegg> disabled banner impressions loader [production]
00:15 <mutante> mw1275 depooled (T124956) [production]
00:08 <ejegg> updated CiviCRM from 5a83ee18da383b8a2e4381c82307c7a50d4b973b to dfc26f058327bb6248af44ad39a9864b6c3a6581 [production]
00:01 <mutante> seeing "php: Lost parent, LightProcess exiting" in syslog on mw1275 today (T124956) [production]
2017-06-08
23:48 <mutante> mw1275 - restarted hhvm (php: Lost parent, LightProcess exiting in syslog) [production]
23:37 <demon@tin> rebuilt wikiversions.php and synchronized wikiversions files: remaining wikis to wmf.4 [production]
23:16 <demon@tin> Synchronized php-1.30.0-wmf.4/extensions/CirrusSearch/includes/Job/DeleteArchive.php: Fix array access bug (duration: 00m 43s) [production]
23:15 <demon@tin> Synchronized php-1.30.0-wmf.4/extensions/GeoData/includes/Searcher.php: Temp hax to point GeoData at codfw DC (duration: 00m 43s) [production]
22:56 <demon@tin> Synchronized php-1.30.0-wmf.4/extensions/RevisionSlider/src/RevisionSliderHooks.php: Re-syncing with permanent committed fix (duration: 00m 44s) [production]
22:36 <ejegg> updated civicrm from c70ae650bdf2e670d1dcb77f471b3ac9c5fb05f9 to 5a83ee18da383b8a2e4381c82307c7a50d4b973b [production]
22:29 <demon@tin> Synchronized php-1.30.0-wmf.4/extensions/RevisionSlider/src/RevisionSliderHooks.php: Livehack/test (duration: 00m 44s) [production]
22:17 <demon@tin> Synchronized php-1.30.0-wmf.4/extensions/MobileFrontend/includes/specials/SpecialMobileDiff.php: (no justification provided) (duration: 00m 44s) [production]
22:15 <mobrovac@tin> Finished deploy [changeprop/deploy@836b070]: Rate limiting, attempt #2 (duration: 01m 23s) [production]
22:13 <mobrovac@tin> Started deploy [changeprop/deploy@836b070]: Rate limiting, attempt #2 [production]
21:56 <mobrovac@tin> Finished deploy [changeprop/deploy@dc1948f]: (no justification provided) (duration: 01m 39s) [production]
21:54 <mobrovac@tin> Started deploy [changeprop/deploy@dc1948f]: (no justification provided) [production]
21:54 <mobrovac@tin> Finished deploy [changeprop/deploy@56f7511]: (no justification provided) (duration: 01m 32s) [production]