2019-01-22
12:19 <zfilipin@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:484021|Create extra namespace in kawiktionary (T212956)]] (duration: 00m 46s) [production]
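Entries of the form "Synchronized <file>: <message>" are emitted by scap on the deployment host. A minimal sketch of the kind of invocation behind the line above, assuming the usual staging directory:

    # Push a single changed config file to the cluster; the quoted message is
    # what scap records in this log.
    cd /srv/mediawiki-staging
    scap sync-file wmf-config/InitialiseSettings.php 'SWAT: [[gerrit:484021|Create extra namespace in kawiktionary (T212956)]]'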
12:13 <zfilipin@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:485043|Enable transwiki user group on ne.wikipedia (T214036)]] (duration: 00m 47s) [production]
12:09 <jynus> running mariabackup on dbstore1001:s1 [production]
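A minimal sketch of a mariabackup run for the s1 instance on dbstore1001; the defaults file, target directory and credentials are placeholders, not the production values:

    # Take a physical backup of the s1 instance.
    mariabackup --defaults-file=/etc/mysql/instances/s1.cnf \
        --backup \
        --target-dir=/srv/backups/s1 \
        --user=dump --password='********'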
12:02 <Lucas_WMDE> tried and failed to deploy patch for T212118 [production]
10:55 <marostegui> Deploy schema change on db1098:3316 - T210713 [production]
10:55 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1098:3316 T210713 (duration: 00m 45s) [production]
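The two db1098:3316 entries above are the usual depool / alter / repool cycle for a schema change. A sketch under assumptions: the table and column names are hypothetical (the real change is whatever T210713 specifies), and the connection details are illustrative.

    # 1. Depool: reduce/remove the instance's weight in wmf-config/db-eqiad.php,
    #    then sync the config from the deployment host.
    scap sync-file wmf-config/db-eqiad.php 'Depool db1098:3316 T210713'

    # 2. Run the ALTER on the depooled replica (hypothetical table/column).
    mysql -h db1098.eqiad.wmnet -P 3316 \
        -e "ALTER TABLE examplewiki.example_table ADD example_col INT NOT NULL DEFAULT 0"

    # 3. Repool: revert the config edit and sync again.
    scap sync-file wmf-config/db-eqiad.php 'Repool db1098:3316 T210713'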
10:20 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: T204031 wikidata: post edit constraint jobs on 25% of edits (duration: 00m 45s) [production]
10:15 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: T209504 Decrease WBQualityConstraintsTypeCheckMaxEntities from 300 to 150 (duration: 00m 47s) [production]
10:08 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: T204031 wikidata: post edit constraint jobs on 10% of edits (duration: 00m 47s) [production]
09:59 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1096:3316 T210713 (duration: 00m 47s) [production]
09:56 <gehel@puppetmaster1001> conftool action : set/pooled=yes; selector: dc=eqiad,cluster=maps,name=maps1003.eqiad.wmnet [production]
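The conftool line above corresponds to a confctl call on the puppetmaster; a sketch, assuming the standard selector syntax:

    # Pool maps1003 back into the eqiad maps cluster and check the result.
    sudo confctl select 'dc=eqiad,cluster=maps,name=maps1003.eqiad.wmnet' set/pooled=yes
    sudo confctl select 'name=maps1003.eqiad.wmnet' get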
09:55 <gehel> repooling maps1003 after upgrade to stretch - T198622 [production]
09:40 <marostegui> Deploy schema change on db1096:3316 - T210713 [production]
09:40 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1096:3316 T210713 (duration: 00m 48s) [production]
09:23 <jynus> stop, upgrade and restart db1097 [production]
08:55 <dcausse> elasticsearch: closing indices in search-chi@(eqiad|codfw) that were moved to other elastic instances (T214052) [production]
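Closing an index goes through the standard Elasticsearch close API; a sketch, with the endpoint host/port illustrative and the index pattern borrowed from the related frwikiquote_* entries of the previous day (same task, T214052):

    # Close indices that now live on other instances, on both datacenters.
    curl -XPOST 'http://search.svc.eqiad.wmnet:9200/frwikiquote_*/_close'
    curl -XPOST 'http://search.svc.codfw.wmnet:9200/frwikiquote_*/_close'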
08:53 <jynus@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1097 (duration: 00m 45s) [production]
08:42 <moritzm> installing policykit-1 security updates on trusty [production]
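Per host the update is just an upgrade of the single package; a sketch (fleet-wide this is normally driven by the usual update tooling rather than typed by hand):

    # Refresh package lists and upgrade only policykit-1 to the patched build.
    sudo apt-get update
    sudo apt-get install --only-upgrade policykit-1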
08:26 <marostegui> Deploy schema change on dbstore1001:3316 - T210713 [production]
08:21 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1090:3317 T210478 (duration: 00m 48s) [production]
08:14 <marostegui> Compress s7 on dbstore1003 - T210478 [production]
06:42 <marostegui> Deploy schema change on db1078 (s3 master) - T85757 [production]
06:36 <marostegui> Stop MySQL on db1090:3317 to clone dbstore1003 - T210478 [production]
06:36 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1090:3317 T210478 (duration: 00m 49s) [production]
05:45 <kartik@deploy1001> Finished deploy [cxserver/deploy@e0ca16b]: Update cxserver to c5ff0bf (duration: 04m 15s) [production]
05:40 <kartik@deploy1001> Started deploy [cxserver/deploy@e0ca16b]: Update cxserver to c5ff0bf [production]
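The Started/Finished pair above is a normal scap3 service deploy; a sketch from the deployment host, with the repo path as an assumption:

    # Update the cxserver deploy repo and push it to the service targets.
    cd /srv/deployment/cxserver/deploy
    git pull && git submodule update --init --recursive
    scap deploy 'Update cxserver to c5ff0bf'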
02:17 <onimisionipe> restarting tilerator on maps100[1-2] [production]
00:38 <chaomodus> stat1007 nagios-nrpe-server was down and alerting; restarting it fixed the issue [production]
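For reference, the fix amounts to bouncing the NRPE agent on the host; a sketch:

    # Check why the agent stopped, restart it, and confirm it is back up.
    systemctl status nagios-nrpe-server
    sudo systemctl restart nagios-nrpe-server
    systemctl is-active nagios-nrpe-server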
2019-01-21
22:33 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.13/extensions/TemplateData/includes/api/ApiTemplateData.php: I7647ddfc47 - T213953 (duration: 00m 47s) [production]
19:35 <jynus@deploy1001> Synchronized wmf-config/db-codfw.php: Repool db2040 (duration: 00m 45s) [production]
19:23 <jynus> mysql.py -h db1115 zarcillo -e "UPDATE masters SET instance = 'db2047' WHERE section = 's7' and dc = 'codfw'" T214264 [production]
18:55 <jynus> stop and upgrade db2040 T214264 [production]
18:52 <onimisionipe> pool maps1003 - postgresql replication lag issues have been fixed [production]
18:24 <jynus@deploy1001> Synchronized wmf-config/db-codfw.php: Depool db2040, promote db2047 to s7 master (duration: 00m 46s) [production]
17:51 <jynus> stop and apply puppet changes to db2047 T214264 [production]
17:44 <jynus> stop replication on db2040 for master switch T214264 [production]
17:16 <jynus> stop and upgrade db2054 [production]
16:03 <arturo> T214303 reimaging/renaming labtestneutron2002.codfw.wmnet (jessie) to cloudnet2002-dev.codfw.wmnet (stretch) [production]
15:58 <onimisionipe> reinitializing slave replication (postgres) on maps1003 [production]
15:52 <jynus> stop and upgrade db2061 [production]
15:19 <dcausse> closing frwikiquote_* indices on elasticsearch search-chi@codfw (T214052) [production]
15:11 <dcausse> closing frwikiquote_* indices on elasticsearch search-chi@eqiad (T214052) [production]
13:58 <marostegui> Compress enwiki on dbstore1003:3311 - T210478 [production]
12:36 <jijiki> Restarting memcached on mc1025 to apply '-R 200' - T208844 [production]
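memcached's -R option caps how many requests a single connection may issue per event-loop iteration (default 20), so raising it to 200 lets busy connections do more work before yielding. A sketch of applying it, assuming the option lives in /etc/memcached.conf (in production this is puppet-managed):

    # Add the new per-connection request limit and restart to pick it up.
    echo '-R 200' | sudo tee -a /etc/memcached.conf
    sudo systemctl restart memcached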
11:25 <onimisionipe> depool maps1003 to fix replication lag issues [production]
10:51 <elukey> disable puppet fleetwide to ease the merge/deploy of a puppet admin module change - T212949 [production]
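Fleet-wide puppet disabling is usually done by running the agent's own disable command on every host via cumin; a sketch, with the host alias and message as assumptions:

    # Disable puppet everywhere with a note pointing at the task, merge the
    # admin module change, then re-enable once it looks good on a canary.
    sudo cumin 'A:all' 'puppet agent --disable "admin module change - T212949"'
    sudo cumin 'A:all' 'puppet agent --enable'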
10:36 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1077 - T85757 (duration: 00m 44s) [production]
10:33 <jynus> upgrade and restart db2047 T214264 [production]
10:26 <addshore@deploy1001> Synchronized php-1.33.0-wmf.13/extensions/ArticlePlaceholder/includes/AboutTopicRenderer.php: T213739 Pass a usageAccumulator to SidebarGenerator (duration: 00m 47s) [production]
10:19 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1089 (duration: 00m 45s) [production]