2018-10-23
23:59 <legoktm> deploying https://gerrit.wikimedia.org/r/468171 [releng]
23:31 <twentyafterfour@deploy1001> Started scap: syncing 1.33.0-wmf.1 refs T206655 [production]
23:30 <twentyafterfour@deploy1001> Synchronized php-1.32.0-wmf.26/includes/export/WikiExporter.php: sync https://gerrit.wikimedia.org/r/#/c/mediawiki/core/+/469319/ refs T207628 (duration: 01m 39s) [production]
22:16 <eileen> civicrm revision changed from bde28d4453 to 1c0a1b2406, config revision is c0a8be03a1 [production]
22:14 <twentyafterfour> scap prep 1.33.0-wmf.1 [production]
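The train entries above (a `scap prep` of the new branch, followed by `Synchronized` per-file syncs) follow the usual MediaWiki deployment flow on the deploy host. A hedged sketch of the equivalent commands, with the branch and file path taken from the log (exact flags may differ by scap version):

```shell
# On deploy1001: stage the new train branch (creates php-1.33.0-wmf.1)
scap prep 1.33.0-wmf.1

# Sync a single backported file with an explanatory message, as in the
# 23:30 entry; sync-file is the per-file variant that produces the
# "Synchronized ..." log lines
scap sync-file php-1.32.0-wmf.26/includes/export/WikiExporter.php \
    'sync https://gerrit.wikimedia.org/r/469319 refs T207628'
```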
21:50 <paladox> upgrading gerrit.git.wmflabs.org (gerrit-test3) to 2.16rc0 (basically 2.16rc1) [git]
21:47 <mutante> icinga1001 - replacing check_ping with check_fping as the standard host check command, for faster host checks (another tip from Nagios Tuning guide, still manual testing) (T202782) [production]
21:30 <mutante> icinga1001 - changing check_result_reaper_frequency from 10 to 3, trying to lower average check latency. "allow faster check result processing -> requires more CPU" (T202782) [production]
20:24 <mutante> switched to local puppetmaster in project puppet, so paladox can cherry-pick his suggested skin change for planet and we don't disable puppet for that [planet]
19:54 <gtirloni> turned shinken-01 down [shinken]
19:45 <gtirloni> pointed shinken.wmflabs.org to shinken-02 [shinken]
19:43 <gtirloni> deleted shinken-jessie.wmflabs.org web proxy [shinken]
19:30 <twentyafterfour@deploy1001> Synchronized php-1.32.0-wmf.26/skins/MinervaNeue/resources/skins.minerva.scripts/pageIssuesLogger.js: sync https://gerrit.wikimedia.org/r/#/c/mediawiki/skins/MinervaNeue/+/469244/ refs T207423 (duration: 00m 48s) [production]
19:27 <twentyafterfour> deploying https://gerrit.wikimedia.org/r/#/c/mediawiki/skins/MinervaNeue/+/469244/ [production]
19:22 <bawolff> deploy patch T207778 [production]
18:37 <gtirloni> disabled puppet on shinken-01 during migration to new instance [shinken]
18:17 <mutante> icinga - performance/latency comparison - https://icinga.wikimedia.org/cgi-bin/icinga/extinfo.cgi?type=4 vs https://icinga-stretch.wikimedia.org/cgi-bin/icinga/extinfo.cgi?type=4 (T202782) [production]
18:13 <mutante> icinga1001 - manually set max_concurrent_checks to 0 (unlimited), restart icinga, keep puppet disabled, for testing (it ran into the limit of 10000 all the time, causing lots of logging, and the CPU power is actually slightly lower than on einsteinium) (T202782) refs: Nagios Tuning, point 7 https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/3/en/tuning.html [production]
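The three Icinga tuning changes logged above (fping-based host checks, a lower check-result reaper frequency, and unlimited concurrent checks) correspond to standard Nagios/Icinga 1.x configuration directives. A sketch of the relevant fragments, assuming the stock option names from the Nagios tuning guide, with values taken from the log entries:

```
# icinga.cfg / nagios.cfg tuning (values from the entries above)
check_result_reaper_frequency=3   # was 10; process check results more often
max_concurrent_checks=0           # 0 = unlimited; previously capped at 10000

# commands.cfg: faster host checks via fping instead of ping
# (command name and plugin arguments are illustrative assumptions)
define command {
    command_name  check-host-alive
    command_line  $USER1$/check_fping -H $HOSTADDRESS$
}
```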
17:20 <jforrester@deploy1001> Synchronized wmf-config/InitialiseSettings-labs.php: BETA: Set wmgWikibaseCachePrefix for commonswiki I0badd355723 (duration: 00m 46s) [production]
17:18 <ejegg> updated standalone SmashPig deploy from 2292111bda to b638ca02bc [production]
17:15 <jforrester@deploy1001> Synchronized wmf-config/CommonSettings.php: For WBMI, intentionally rather than implicitly install Wikibase I38574e670 (duration: 00m 47s) [production]
17:13 <mutante> icinga1001 rm /var/log/user.log.1 - was 14G and using 25% of the / partition and server out of disk :/ [production]
17:06 <ejegg> rolled SmashPig back to 2292111bda [production]
17:03 <ejegg> updated standalone SmashPig deployment from 2292111bda to 18da9727d8 [production]
16:25 <ottomata> altering topic eventlogging_ReadingDepth to increase partitions from 1 to 12 [analytics]
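Increasing a topic's partition count, as in the `eventlogging_ReadingDepth` entry above, is done with the Kafka topics CLI. A minimal sketch; the broker address is an assumption, and note that Kafka only allows partitions to be increased, never decreased (older Kafka versions use `--zookeeper` instead of `--bootstrap-server` for alter operations):

```shell
# Grow eventlogging_ReadingDepth from 1 to 12 partitions
kafka-topics.sh --alter \
    --bootstrap-server localhost:9092 \
    --topic eventlogging_ReadingDepth \
    --partitions 12
```

Existing keyed messages stay in their old partitions; only new messages are spread across all 12, so key-based ordering guarantees are affected by the change.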
16:20 <volans> restarted pdfrender on scb1004 [production]
14:47 <herron> added confluent-kafka-2.11 1.1.0-1 package to jessie-wikimedia/thirdparty T206454 [production]
14:34 <anomie@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Setting comment table migration stage to write-new/read-both on group 1 (T166733) (duration: 00m 46s) [production]
14:22 <anomie@deploy1001> Synchronized php-1.32.0-wmf.26/includes/filerepo/file/LocalFile.php: Backport for T207419 (duration: 00m 47s) [production]
14:02 <gehel> repooling / unbanning elastics1031 - T207724 [production]
14:01 <moritzm> installing spice security updates [production]
14:00 <ema> upload trafficserver 8.0.0-1wm1 to stretch-wikimedia/main T204232 [production]
13:48 <gehel> depooling / banning elastics1031 - T207724 [production]
13:43 <gehel> depooling / banning elastics1029 - T207724 [production]
13:35 <gehel> rolling restart of blazegraph for change to blazegraph home dir [production]
13:23 <gtirloni> Added gtirloni to the project [project-proxy]
13:22 <gehel> depooling / banning elastics1018 - T207724 [production]
12:29 <gehel> depooling / banning elastics1028 and 1030 - T207724 [production]
11:49 <sebastian-wmse> Deploy latest from Git master: 13787aa (T192683), fc44dd4, 15214e8 (T192683) [wikispeech]
11:23 <zeljkof> EU SWAT finished [production]
11:20 <zfilipin@deploy1001> Synchronized wmf-config/throttle.php: SWAT: [[gerrit:469168|New throttle rule for Wikipedia in Ort (T207714)]] (duration: 00m 46s) [production]
11:11 <zfilipin@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:469180|Enable RCPatrol for srwikiquote (T207732)]] (duration: 00m 47s) [production]
10:13 <ema> upload libc++ 6.0.1 to stretch-wikimedia/main T204232 [production]
09:42 <jynus> stopping db1087 to fix db1124 [production]
09:31 <gehel> depooling / banning elastics1017 and 1022 - T207724 [production]
09:13 <godog> roll-restart thumbor to send statsd traffic through statsd_exporter - T205870 [production]
08:08 <godog> update hp firmware to 6.60 on ms-be2017 - T141756 [production]
07:14 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1100 - T184805 (duration: 00m 48s) [production]
06:50 <elukey> powercycle ms-be2017 (frozen since ~8hrs ago) [production]
06:42 <elukey> restart yarn and hdfs daemon on analytics1068 to pick up correct config (the host was down since before we swapped the Hadoop masters due to hw failure) [production]