2016-03-01
23:17 <mutante> maps-test2001 - "could not find dependency for postgres class" is NOT related to my recent change; the icinga crit has existed for a long time [production]
22:34 <mutante> re-enabled puppet runs on all mw* servers, mediawiki roles now in modules/role/manifests/mediawiki/ [production]
22:27 <mutante> temp. disabling puppet runs on mw appservers to be extra safe during mediawiki module change [production]
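    (For context on the two entries above: pausing and later re-enabling Puppet agent runs across a set of hosts is normally done with "puppet agent --disable" / "--enable". A minimal sketch follows; the host list, the use of plain ssh, and the disable message are illustrative assumptions, not the exact tooling used here.)

        # Hedged sketch: pause Puppet on a group of hosts during a risky change,
        # then re-enable it afterwards. Host names below are placeholders.
        HOSTS="mw1001.eqiad.wmnet mw1002.eqiad.wmnet"   # hypothetical examples

        for h in $HOSTS; do
          # --disable writes an agent lock with an optional reason message
          ssh "$h" sudo puppet agent --disable "mediawiki role module move"
        done

        # ... apply and verify the change ...

        for h in $HOSTS; do
          ssh "$h" sudo puppet agent --enable
        done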
21:29 <gehel> elastic1001.eqiad.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
20:29 <demon@tin> Finished scap: group0 to wmf.15 (duration: 31m 24s) [production]
19:58 <demon@tin> Started scap: group0 to wmf.15 [production]
19:19 <jynus> testing heartbeat in m5 (db1009, db2030) [production]
19:14 <demon@tin> scap aborted: testwikis to wmf.15 and rebuild l10n (duration: 01m 19s) [production]
19:14 <chasemp> clean out /var/log/atop and /var/log/account on iridium [production]
19:13 <demon@tin> Started scap: testwikis to wmf.15 and rebuild l10n [production]
18:53 <mutante> iridium - gzip /var/log/atop/atop_20160* [production]
18:51 <mutante> iridium: apt-get clean for some more disk space [production]
18:49 <subbu> finished deploying parsoid sha 1f7ed5d0 [production]
18:44 <subbu> synced parsoid code; restarted parsoid on wtp1002 as a canary [production]
18:41 <subbu> starting parsoid deploy [production]
17:52 <gehel> elastic2024.codfw.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
17:18 <mobrovac> restbase rolling-restart restbase for https://gerrit.wikimedia.org/r/#/c/273974/ [production]
17:05 <thcipriani@tin> Synchronized wmf-config/ProductionServices.php: SWAT: Add kafka1012.eqiad.wmnet back to the MediaWiki config [[gerrit:273488]] (duration: 00m 39s) [production]
16:44 <thcipriani@tin> Synchronized wmf-config/InitialiseSettings.php: SWAT: Enable rollbacker and suppressredirect group at cewiki [[gerrit:273828]] (duration: 00m 41s) [production]
16:40 <gehel> elastic2023.codfw.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
16:39 <hashar> restarting Jenkins [production]
16:37 <thcipriani@tin> Synchronized wmf-config/InitialiseSettings.php: SWAT: Correct one Domain at $wgCopyUploadsDomains [[gerrit:273776]] (duration: 00m 40s) [production]
16:33 <hashar> A bunch of Jenkins jobs got stalled because I killed threads in Jenkins to unblock integration-slave-trusty-1003 :-( Jenkins / Zuul is catching up. [production]
16:32 <thcipriani@tin> Synchronized wmf-config/filebackend-production.php: SWAT: Configure redis LockManager in both DCs, use the master everywhere. PART II [[gerrit:266514]] (duration: 00m 46s) [production]
16:30 <thcipriani@tin> Synchronized wmf-config/ProductionServices.php: SWAT: Configure redis LockManager in both DCs, use the master everywhere. PART I [[gerrit:266514]] (duration: 00m 40s) [production]
16:24 <thcipriani@tin> Synchronized wmf-config/redis.php: SWAT: Use wmfMasterDatacenter for picking the master redis config [[gerrit:266513]] (duration: 00m 39s) [production]
16:18 <thcipriani@tin> Synchronized wmf-config/CirrusSearch-production.php: SWAT: Add references to wmfServices for CirrusSearch [[gerrit:266512]] (duration: 00m 56s) [production]
15:45 <gehel> elastic2022.codfw.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
14:55 <elukey@tin> Synchronized wmf-config/filebackend-production.php: Add mc1003 to the lock managers pool after maintenance (duration: 00m 40s) [production]
14:47 <elukey> mc1003.eqiad added back to the redis/memcached pool after maintenance. [production]
14:34 <gehel> elastic2021.codfw.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
13:53 <gehel> elastic2020.codfw.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
13:09 <elukey@tin> Synchronized wmf-config/filebackend-production.php: Remove mc1003 from the lock managers pool for maintenance (duration: 00m 40s) [production]
12:48 <elukey> removed mc1003 from redis/memcached pools for maintenance [production]
12:37 <gehel> elastic2019.codfw.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
12:16 <moritzm> shutting down curium (decommissioned) [production]
11:53 <moritzm> shutting down berkelium (decommissioned) [production]
11:44 <jynus> shutting down pc100[123] [production]
11:43 <elukey@tin> Synchronized wmf-config/filebackend-production.php: Add mc1002 back to the lock managers pool after maintenance (duration: 01m 01s) [production]
11:40 <gehel> elastic2018.codfw.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
11:20 <elukey> mc1002.eqiad added back to the memcached/redis pools after maintenance [production]
10:53 <jynus> disabling puppet and following steps to decommission pc100[123] [production]
10:42 <gehel> elastic2017.codfw.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
10:06 <elukey> Amended previous log - Remove mc1002 from the lock managers after maintenance [production]
10:05 <elukey@tin> Synchronized wmf-config/filebackend-production.php: Add mc1002 from the lock managers after maintenance (duration: 00m 56s) [production]
09:46 <gehel> elastic2016.codfw.wmnet: upgrading to 1.7.5, shipping logs to logstash (T122697, T109101) [production]
09:32 <elukey> removed mc1002 from the redis/memcached pools for maintenance [production]
07:11 <ebernhardson> upgrade elastic2015.codfw.wmnet to elasticsearch 1.7.5 [production]
06:10 <ebernhardson> upgrade elastic2014.codfw.wmnet to elasticsearch 1.7.5 [production]
04:55 <ebernhardson> upgrade elastic2013.codfw.wmnet to elasticsearch 1.7.5 [production]
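    (The repeated "upgrading to 1.7.5" entries above each cover one node of a rolling Elasticsearch upgrade. A minimal sketch of the usual per-node step follows; the allocation-toggle calls, package pin, and service name reflect standard Elasticsearch 1.x practice and are assumptions, not the specific script used on these hosts.)

        # Hedged sketch of one rolling-upgrade step (generic ES 1.x procedure).

        # 1. Stop shard allocation so the cluster does not rebalance while the node is down
        curl -s -XPUT 'http://localhost:9200/_cluster/settings' -d '
        { "transient": { "cluster.routing.allocation.enable": "none" } }'

        # 2. Upgrade and restart the node (package/service names assumed)
        sudo service elasticsearch stop
        sudo apt-get install elasticsearch=1.7.5
        sudo service elasticsearch start

        # 3. Re-enable allocation and wait for green before moving to the next node
        curl -s -XPUT 'http://localhost:9200/_cluster/settings' -d '
        { "transient": { "cluster.routing.allocation.enable": "all" } }'
        curl -s 'http://localhost:9200/_cluster/health?wait_for_status=green&timeout=10m'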