2019-01-22
05:45 <kartik@deploy1001> Finished deploy [cxserver/deploy@e0ca16b]: Update cxserver to c5ff0bf (duration: 04m 15s) [production]
05:40 <kartik@deploy1001> Started deploy [cxserver/deploy@e0ca16b]: Update cxserver to c5ff0bf [production]
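The pair above is scap3's start/finish record for a service deploy. A minimal sketch of the operator side, assuming the conventional deploy-repo path on deploy1001:

    # from the cxserver deploy repository (path assumed)
    cd /srv/deployment/cxserver/deploy
    git pull                                  # picks up deploy commit e0ca16b
    scap deploy 'Update cxserver to c5ff0bf'  # emits the Started/Finished lines above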
02:17 <onimisionipe> restarting tilerator on maps100[1-2] [production]
00:38 <chaomodus> stat1007: nagios-nrpe-server was down and alerting; restarting it fixed the issue [production]
2019-01-21
22:33 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.13/extensions/TemplateData/includes/api/ApiTemplateData.php: I7647ddfc47 - T213953 (duration: 00m 47s) [production]
19:35 <jynus@deploy1001> Synchronized wmf-config/db-codfw.php: Repool db2040 (duration: 00m 45s) [production]
19:23 <jynus> mysql.py -h db1115 zarcillo -e "UPDATE masters SET instance = 'db2047' WHERE section = 's7' and dc = 'codfw'" T214264 [production]
18:55 <jynus> stop and upgrade db2040 T214264 [production]
18:52 <onimisionipe> pool maps1003 - postgresql replication lag issue has been fixed [production]
18:24 <jynus@deploy1001> Synchronized wmf-config/db-codfw.php: Depool db2040, promote db2047 to s7 master (duration: 00m 46s) [production]
17:51 <jynus> stop and apply puppet changes to db2047 T214264 [production]
17:44 <jynus> stop replication on db2040 for master switch T214264 [production]
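The jynus entries between 17:44 and 19:35 trace an s7 codfw master switchover: stop replication on the outgoing master db2040, promote db2047, record the new master in the zarcillo metadata database (the 19:23 entry), then repool. A rough sketch of the MariaDB side, with hostnames taken from the log and everything else assumed:

    # confirm the outgoing master's final position once writes have stopped
    mysql.py -h db2040 -e "SHOW MASTER STATUS\G"
    # promote db2047: stop replicating and clear its replica configuration
    mysql.py -h db2047 -e "STOP SLAVE; RESET SLAVE ALL;"
    # record the change in zarcillo, verbatim from the 19:23 entry
    mysql.py -h db1115 zarcillo -e "UPDATE masters SET instance = 'db2047' WHERE section = 's7' AND dc = 'codfw';"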
17:16 <jynus> stop and upgrade db2054 [production]
16:03 <arturo> T214303 reimaging/renaming labtestneutron2002.codfw.wmnet (jessie) to cloudnet2002-dev.codfw.wmnet (stretch) [production]
15:58 <onimisionipe> reinitializing slave replication (postgres) on maps1003 [production]
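Reinitializing a Postgres replica generally means discarding its data directory and re-seeding it from the primary with pg_basebackup; a sketch, with the primary hostname and data path assumed:

    sudo service postgresql stop
    sudo -u postgres rm -rf /srv/postgresql/9.6/main      # path assumed
    sudo -u postgres pg_basebackup -h maps1001.eqiad.wmnet -U replication \
        -D /srv/postgresql/9.6/main -X stream -P          # primary assumed
    sudo service postgresql start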
15:52 <jynus> stop and upgrade db2061 [production]
15:19 <dcausse> closing frwikiquote_* indices on elasticsearch search-chi@codfw (T214052) [production]
15:11 <dcausse> closing frwikiquote_* indices on elasticsearch search-chi@eqiad (T214052) [production]
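Closing indices uses Elasticsearch's _close API, which takes them out of service without deleting data. A sketch of the two operations above, with the cluster endpoints assumed:

    curl -XPOST 'https://search.svc.codfw.wmnet:9243/frwikiquote_*/_close'
    curl -XPOST 'https://search.svc.eqiad.wmnet:9243/frwikiquote_*/_close'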
13:58 <marostegui> Compress enwiki on dbstore1003:3311 - T210478 [production]
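Compression in these entries is typically InnoDB row compression applied table by table; one illustrative statement (the table name is an example, not from the log):

    mysql.py -h dbstore1003:3311 enwiki -e \
        "ALTER TABLE revision ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;"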
12:36 <jijiki> Restarting memcached on mc1025 to apply '-R 200' - T208844 [production]
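memcached's -R flag caps how many requests one connection may issue per event-loop pass (default 20), so '-R 200' lets busy connections do more work before yielding. A sketch of applying it, with the config location assumed:

    # add to the memcached options (e.g. /etc/memcached.conf; location assumed)
    -R 200
    # then restart to apply, as logged above
    sudo service memcached restart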
11:25 <onimisionipe> depool maps1003 to fix replication lag issues [production]
10:51 <elukey> disable puppet fleetwide to ease the merge/deploy of a puppet admin module change - T212949 [production]
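Fleet-wide puppet disables are normally issued from a cumin host; a hedged sketch, assuming the A:all alias and the disable-puppet wrapper:

    sudo cumin -b 50 'A:all' "disable-puppet 'puppet admin module change - T212949 - elukey'"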
10:36 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1077 - T85757 (duration: 00m 44s) [production]
10:33 <jynus> upgrade and restart db2047 T214264 [production]
10:26 <addshore@deploy1001> Synchronized php-1.33.0-wmf.13/extensions/ArticlePlaceholder/includes/AboutTopicRenderer.php: T213739 Pass a usageAccumulator to SidebarGenerator (duration: 00m 47s) [production]
10:19 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1089 (duration: 00m 45s) [production]
09:57 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Give more traffic to db1089 (duration: 00m 45s) [production]
09:42 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly Repool db1089 T210478 (duration: 00m 45s) [production]
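The three db1089 entries from 09:42 to 10:19 show the standard gradual-repool pattern: raise the host's weight in wmf-config/db-eqiad.php in steps, syncing each edit. One step, sketched (the weight values are assumed):

    # on deploy1001, in the operations/mediawiki-config checkout:
    # edit wmf-config/db-eqiad.php to raise db1089's load weight, then
    scap sync-file wmf-config/db-eqiad.php 'Slowly repool db1089 T210478'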
09:30 <marostegui> Compress a few tables on dbstore1003:3315 - T210478 [production]
08:35 <marostegui> Stop replication db1077 to deploy schema change - T85757 [production]
08:31 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1077 - T85757 (duration: 00m 46s) [production]
08:24 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1123 - T85757 (duration: 00m 48s) [production]
08:10 <moritzm> installing OpenSSL security updates [production]
07:39 <marostegui> Stop replication on db1124:3313 to fix triggers - T85757 [production]
07:00 <marostegui> Stop MySQL on db1089 to clone dbstore1003 - T210478 [production]
07:00 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1089 T210478 (duration: 00m 47s) [production]
06:54 <marostegui> Deploy schema change on db1123 - T85757 [production]
06:54 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1123 - T85757 (duration: 00m 50s) [production]
06:47 <marostegui> Drop tag_summary table from db1023, db1077, db1075 and db1078 T212255 [production]
06:45 <vgutierrez@puppetmaster1001> conftool action : set/pooled=no; selector: name=cp5010.eqsin.wmnet [production]
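That line is conftool's structured record of a depool; the equivalent confctl invocation would look like:

    sudo confctl select 'name=cp5010.eqsin.wmnet' set/pooled=no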
06:32 <marostegui> Drop tag_summary table from db1095:3313 - T212255 [production]
06:27 <marostegui> Drop tag_summary table from dbstore1002:s3 - T212255 [production]
06:12 <marostegui> Drop tag_summary table from s3 codfw - T212255 [production]
06:08 <marostegui> Drop tag_summary table from s8 - T212255 [production]
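The tag_summary removals above are plain DROP TABLE statements run per wiki database (the table was superseded under T212255); one illustrative instance, with host and database chosen as examples:

    mysql.py -h dbstore1002 wikidatawiki -e "DROP TABLE IF EXISTS tag_summary;"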
2019-01-20
15:13 <marostegui> Force WriteBack on db2040 - T214264 [production]
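Forcing write-back switches the RAID controller's cache policy even when the battery is degraded, a common stopgap for write latency. A hedged sketch with MegaCli, assuming a MegaRAID controller:

    sudo megacli -LDSetProp -ForcedWB -Immediate -LAll -aAll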
01:07 <cdanis> cdanis@wdqs1004.eqiad.wmnet /var/log/wdqs % sudo service wdqs-blazegraph restart [production]
2019-01-19
22:12 <ariel@deploy1001> Finished deploy [dumps/dumps@ab79bbb]: multistream dumps in parallel, recombine gz and multistream without decompression (duration: 00m 03s) [production]
22:12 <ariel@deploy1001> Started deploy [dumps/dumps@ab79bbb]: multistream dumps in parallel, recombine gz and multistream without decompression [production]
20:34 <gtirloni> upgraded and rebooted labstore200{3,4} [production]
12:34 <onimisionipe> pool maps1003 - stretch migration is complete T198622 [production]