2018-10-24
09:20 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1092 and db1087 (duration: 01m 05s) [production]
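A gradual ("slow") repool like this is normally done by raising the replica's read weight in wmf-config/db-eqiad.php over several syncs. A minimal sketch of the relevant sectionLoads entry, assuming the usual layout of that file; the section, the hostnames other than db1087/db1092, and the weights are illustrative, not the values actually deployed:

    'sectionLoads' => [
        's8' => [
            'db1071' => 0,    # master, takes no general read traffic (hostname illustrative)
            'db1087' => 50,   # repooling: weight raised step by step in later syncs
            'db1092' => 50,   # repooling: weight raised step by step in later syncs
            'db1099' => 300,  # fully pooled replica (hostname illustrative)
        ],
    ],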
08:57 <hashar> gerrit: added Lars "liw" Wirzenius to the Administrators group | T207830 [releng]
08:55 <marostegui> Stop MySQL for upgrade and reboot on db1087 [production]
08:47 <marostegui> Stop MySQL on db1092 for upgrade and reboot [production]
08:03 <godog> fix aggregation to 'sum' for MediaWiki.RevisionSlider - T205416 [production]
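Switching a metric hierarchy to 'sum' aggregation is, on the Graphite side, a storage-aggregation rule. A minimal sketch of the equivalent stanza, assuming a stock storage-aggregation.conf; the stanza name and pattern are illustrative and the real change was applied through puppet:

    [revisionslider-sum]
    pattern = ^MediaWiki\.RevisionSlider\.
    xFilesFactor = 0
    aggregationMethod = sum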
07:33 <gehel> powercycling wdqs1010 - T207817 [production]
07:27 <Krenair> T207825 reapplied role::jobqueue_redis::master to deployment-redis prefix [releng]
07:19 <_joe_> powercycling wdqs1009 [production]
07:04 <elukey> powercycle wdqs1008 [production]
07:00 <Krenair> T207825 replacing deployment-redis3-changeprop with deployment-redis3-changeprop02 (jessie m1.small) [releng]
06:59 <elukey> powercycle wdqs1007 [production]
06:59 <Krenair> T207825 moved role::jobqueue_redis::master role from deployment-redis prefix to deployment-redis0[56] [releng]
06:55 <elukey> powercycle wdqs1006 (depool first) [production]
06:46 <elukey> powercycle wdqs1005 [production]
06:42 <SMalyshev> repooled wdqs1003 [production]
06:35 <_joe_> powercycling wdqs[2001-2002,2004-2006].codfw.wmnet, one at a time [production]
06:33 <elukey> powercycle wdqs1004 [production]
05:24 <kartik@deploy1001> Finished deploy [cxserver/deploy@80dc518]: Update cxserver to 9ad60d9 (T207445) (duration: 04m 06s) [production]
05:22 <kart_> Beta: Updated cxserver to 9ad60d9 [releng]
05:20 <kartik@deploy1001> Started deploy [cxserver/deploy@80dc518]: Update cxserver to 9ad60d9 (T207445) [production]
02:34 <mutante> powercycled wdqs1009 - by request [production]
02:24 <onimisionipe@deploy1001> Finished deploy [wdqs/wdqs@d4692ea]: Reverting update on wdqs1003 to fix wdqs-updater issue (duration: 00m 03s) [production]
02:24 <onimisionipe@deploy1001> Started deploy [wdqs/wdqs@d4692ea]: Reverting update on wdqs1003 to fix wdqs-updater issue [production]
02:12 <onimisionipe@deploy1001> Finished deploy [wdqs/wdqs@d4692ea]: Reverting update on wdqs1003 to fix wdqs-updater issue (duration: 00m 23s) [production]
02:12 <onimisionipe@deploy1001> Started deploy [wdqs/wdqs@d4692ea]: Reverting update on wdqs1003 to fix wdqs-updater issue [production]
01:56 <tstarling@deploy1001> Synchronized php-1.33.0-wmf.1/includes/page/WikiPage.php: T207530 (duration: 00m 53s) [production]
01:46 <tstarling@deploy1001> Synchronized php-1.32.0-wmf.26/includes/page/WikiPage.php: fix deletion performance regression T207530 (duration: 00m 55s) [production]
01:37 <bawolff> deployed T207750 [production]
01:24 <mutante> wdqs2005 - powercycled, wasn't reachable via SSH and also couldn't log in on mgmt, mgmt full of java exceptions from wdqs-updater [production]
00:28 <twentyafterfour@deploy1001> rebuilt and synchronized wikiversions files: group0 wikis to 1.33.0-wmf.1 refs T206655 [production]
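Moving a group to a new branch means editing the wikiversions map, which assigns each wiki's database name to a branch directory, and then syncing it out. A minimal sketch of the resulting wikiversions.json entries, assuming the usual group0 membership (test wikis plus mediawiki.org):

    "testwiki": "php-1.33.0-wmf.1",
    "test2wiki": "php-1.33.0-wmf.1",
    "mediawikiwiki": "php-1.33.0-wmf.1",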
00:26 <twentyafterfour> finished with mediawiki train for group0 refs T206655 [production]
00:08 <twentyafterfour@deploy1001> Finished scap: syncing 1.33.0-wmf.1 refs T206655 (duration: 36m 58s) [production]
2018-10-23
23:59 <legoktm> deploying https://gerrit.wikimedia.org/r/468171 [releng]
23:31 <twentyafterfour@deploy1001> Started scap: syncing 1.33.0-wmf.1 refs T206655 [production]
23:30 <twentyafterfour@deploy1001> Synchronized php-1.32.0-wmf.26/includes/export/WikiExporter.php: sync https://gerrit.wikimedia.org/r/#/c/mediawiki/core/+/469319/ refs T207628 (duration: 01m 39s) [production]
22:16 <eileen> civicrm revision changed from bde28d4453 to 1c0a1b2406, config revision is c0a8be03a1 [production]
22:14 <twentyafterfour> scap prep 1.33.0-wmf.1 [production]
21:50 <paladox> upgrading gerrit.git.wmflabs.org (gerrit-test3) to 2.16rc0 (basically 2.16rc1) [git]
21:47 <mutante> icinga1001 - replacing check_ping with check_fping as the standard host check command, for faster host checks (another tip from the Nagios Tuning guide, still testing manually) (T202782) [production]
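Using check_fping as the host check means pointing the host template's check_command at a command that calls the fping-based plugin instead of check_ping. A minimal sketch in Nagios/Icinga 1.x object syntax; the plugin path, packet count and thresholds are illustrative, not the values used on icinga1001:

    define command{
        command_name    check_fping
        command_line    /usr/lib/nagios/plugins/check_fping -H $HOSTADDRESS$ -n 5 -w 500,20% -c 1000,100%
    }

    define host{
        name             generic-host
        check_command    check_fping
        register         0
    }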
21:30 <mutante> icinga1001 - changing check_result_reaper_frequency from 10 to 3, trying to lower average check latency. "allow faster check result processing -> requires more CPU" (T202782) [production]
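check_result_reaper_frequency is a main-configuration directive controlling how often (in seconds) the daemon harvests finished check results. A minimal sketch of the change described above, assuming the stock configuration file location:

    # /etc/icinga/icinga.cfg (path assumed)
    check_result_reaper_frequency=3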
20:24 <mutante> switched to local puppetmaster in project puppet, so paladox can cherry-pick his suggested skin change for planet and we don't disable puppet for that [planet]
19:54 <gtirloni> turned shinken-01 down [shinken]
19:45 <gtirloni> pointed shinken.wmflabs.org to shinken-02 [shinken]
19:43 <gtirloni> deleted shinken-jessie.wmflabs.org web proxy [shinken]
19:30 <twentyafterfour@deploy1001> Synchronized php-1.32.0-wmf.26/skins/MinervaNeue/resources/skins.minerva.scripts/pageIssuesLogger.js: sync https://gerrit.wikimedia.org/r/#/c/mediawiki/skins/MinervaNeue/+/469244/ refs T207423 (duration: 00m 48s) [production]
19:27 <twentyafterfour> deploying https://gerrit.wikimedia.org/r/#/c/mediawiki/skins/MinervaNeue/+/469244/ [production]
19:22 <bawolff> deploy patch T207778 [production]
18:37 <gtirloni> disabled puppet on shinken-01 during migration to new instance [shinken]
18:17 <mutante> icinga - performance/latency comparison - https://icinga.wikimedia.org/cgi-bin/icinga/extinfo.cgi?type=4 vs https://icinga-stretch.wikimedia.org/cgi-bin/icinga/extinfo.cgi?type=4 (T202782) [production]
18:13 <mutante> icinga1001 - manually set max_concurrent_checks to 0 (unlimited), restarted icinga, kept puppet disabled for testing (it kept hitting the limit of 10000, causing lots of logging, and the CPU power is actually slightly lower than on einsteinium) (T202782) refs: Nagios Tuning, point 7 https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/tuning.html [production]
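max_concurrent_checks caps how many checks the daemon runs in parallel; setting it to 0 removes the cap, trading extra CPU for lower check latency (point 7 of the tuning guide linked in the entry above). A minimal sketch, assuming the same icinga.cfg as above:

    max_concurrent_checks=0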