2018-10-24
02:34 <mutante> powercycled wdqs1009 - by request [production]
02:24 <onimisionipe@deploy1001> Finished deploy [wdqs/wdqs@d4692ea]: Reverting update on wdqs1003 to fix wdqs-updater issue (duration: 00m 03s) [production]
02:24 <onimisionipe@deploy1001> Started deploy [wdqs/wdqs@d4692ea]: Reverting update on wdqs1003 to fix wdqs-updater issue [production]
02:12 <onimisionipe@deploy1001> Finished deploy [wdqs/wdqs@d4692ea]: Reverting update on wdqs1003 to fix wdqs-updater issue (duration: 00m 23s) [production]
02:12 <onimisionipe@deploy1001> Started deploy [wdqs/wdqs@d4692ea]: Reverting update on wdqs1003 to fix wdqs-updater issue [production]
01:56 <tstarling@deploy1001> Synchronized php-1.33.0-wmf.1/includes/page/WikiPage.php: T207530 (duration: 00m 53s) [production]
01:46 <tstarling@deploy1001> Synchronized php-1.32.0-wmf.26/includes/page/WikiPage.php: fix deletion performance regression T207530 (duration: 00m 55s) [production]
01:37 <bawolff> deployed T207750 [production]
01:24 <mutante> wdqs2005 - powercycled, wasn't reachable via SSH and couldn't log in on mgmt either, mgmt console full of java exceptions from wdqs-updater [production]
00:28 <twentyafterfour@deploy1001> rebuilt and synchronized wikiversions files: group0 wikis to 1.33.0-wmf.1 refs T206655 [production]
00:26 <twentyafterfour> finished with mediawiki train for group0 refs T206655 [production]
00:08 <twentyafterfour@deploy1001> Finished scap: syncing 1.33.0-wmf.1 refs T206655 (duration: 36m 58s) [production]

2018-10-23
23:59 <legoktm> deploying https://gerrit.wikimedia.org/r/468171 [releng]
23:31 <twentyafterfour@deploy1001> Started scap: syncing 1.33.0-wmf.1 refs T206655 [production]
23:30 <twentyafterfour@deploy1001> Synchronized php-1.32.0-wmf.26/includes/export/WikiExporter.php: sync https://gerrit.wikimedia.org/r/#/c/mediawiki/core/+/469319/ refs T207628 (duration: 01m 39s) [production]
22:16 <eileen> civicrm revision changed from bde28d4453 to 1c0a1b2406, config revision is c0a8be03a1 [production]
22:14 <twentyafterfour> scap prep 1.33.0-wmf.1 [production]
21:50 <paladox> upgrading gerrit.git.wmflabs.org (gerrit-test3) to 2.16rc0 (basically 2.16rc1) [git]
21:47 <mutante> icinga1001 - replacing check_ping with check_fping as the standard host check command, for faster host checks (another tip from the Nagios Tuning guide, still testing manually) (T202782) [production]
21:30 <mutante> icinga1001 - changing check_result_reaper_frequency from 10 to 3, trying to lower average check latency. "allow faster check result processing -> requires more CPU" (T202782) [production]
20:24 <mutante> switched to local puppetmaster in project puppet, so paladox can cherry-pick his suggested skin change for planet and we don't disable puppet for that [planet]
19:54 <gtirloni> turned shinken-01 down [shinken]
19:45 <gtirloni> pointed shinken.wmflabs.org to shinken-02 [shinken]
19:43 <gtirloni> deleted shinken-jessie.wmflabs.org web proxy [shinken]
19:30 <twentyafterfour@deploy1001> Synchronized php-1.32.0-wmf.26/skins/MinervaNeue/resources/skins.minerva.scripts/pageIssuesLogger.js: sync https://gerrit.wikimedia.org/r/#/c/mediawiki/skins/MinervaNeue/+/469244/ refs T207423 (duration: 00m 48s) [production]
19:27 <twentyafterfour> deploying https://gerrit.wikimedia.org/r/#/c/mediawiki/skins/MinervaNeue/+/469244/ [production]
19:22 <bawolff> deploy patch T207778 [production]
18:37 <gtirloni> disabled puppet on shinken-01 during migration to new instance [shinken]
18:17 <mutante> icinga - performance/latency comparison - https://icinga.wikimedia.org/cgi-bin/icinga/extinfo.cgi?type=4 vs https://icinga-stretch.wikimedia.org/cgi-bin/icinga/extinfo.cgi?type=4 (T202782) [production]
18:13 <mutante> icinga1001 - manually set max_concurrent_checks to 0 (unlimited), restarted icinga, kept puppet disabled for testing (it ran into the limit of 10000 all the time, causing lots of logging, and the CPU power is actually slightly lower than on einsteinium) (T202782) - refs: Nagios Tuning, point 7, https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/tuning.html [production]
17:20 <jforrester@deploy1001> Synchronized wmf-config/InitialiseSettings-labs.php: BETA: Set wmgWikibaseCachePrefix for commonswiki I0badd355723 (duration: 00m 46s) [production]
17:18 <ejegg> updated standalone SmashPig deploy from 2292111bda to b638ca02bc [production]
17:15 <jforrester@deploy1001> Synchronized wmf-config/CommonSettings.php: For WBMI, intentionally rather than implicitly install Wikibase I38574e670 (duration: 00m 47s) [production]
17:13 <mutante> icinga1001 - rm /var/log/user.log.1 - it was 14G, using 25% of the / partition, and the server was out of disk :/ [production]
17:06 <ejegg> rolled SmashPig back to 2292111bda [production]
17:03 <ejegg> updated standalone SmashPig deployment from 2292111bda to 18da9727d8 [production]
16:25 <ottomata> altering topic eventlogging_ReadingDepth to increase partitions from 1 to 12 [analytics]
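(A partition increase like the 16:25 entry is typically done with kafka-topics.sh --alter --partitions 12; the sketch below shows a rough equivalent using the kafka-python admin client. The broker address is a placeholder, not the production analytics cluster.)

from kafka.admin import KafkaAdminClient, NewPartitions

# Connect to a Kafka broker (placeholder address for illustration only).
admin = KafkaAdminClient(bootstrap_servers="localhost:9092")

# Grow the existing topic to 12 partitions; Kafka only allows increasing, never decreasing, the count.
admin.create_partitions({"eventlogging_ReadingDepth": NewPartitions(total_count=12)})

admin.close()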
16:20 <volans> restarted pdfrender on scb1004 [production]
14:47 <herron> added confluent-kafka-2.11 1.1.0-1 package to jessie-wikimedia/thirdparty T206454 [production]
14:34 <anomie@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Setting comment table migration stage to write-new/read-both on group 1 (T166733) (duration: 00m 46s) [production]
14:22 <anomie@deploy1001> Synchronized php-1.32.0-wmf.26/includes/filerepo/file/LocalFile.php: Backport for T207419 (duration: 00m 47s) [production]
14:02 <gehel> repooling / unbanning elastic1031 - T207724 [production]
14:01 <moritzm> installing spice security updates [production]
14:00 <ema> upload trafficserver 8.0.0-1wm1 to stretch-wikimedia/main T204232 [production]
13:48 <gehel> depooling / banning elastic1031 - T207724 [production]
13:43 <gehel> depooling / banning elastic1029 - T207724 [production]
13:35 <gehel> rolling restart of blazegraph for change to blazegraph home dir [production]
13:23 <gtirloni> Added gtirloni to the project [project-proxy]
13:22 <gehel> depooling / banning elastic1018 - T207724 [production]
12:29 <gehel> depooling / banning elastic1028 and 1030 - T207724 [production]