2019-01-15
21:09 <dduvall@deploy1001> Finished scap: testwiki to php-1.33.0-wmf.13 and rebuild l10n cache (duration: 32m 42s) [production]
21:02 <bstorm_> restarting webservicemonitor on tools-services-02 -- acting funny [tools]
20:36 <dduvall@deploy1001> Started scap: testwiki to php-1.33.0-wmf.13 and rebuild l10n cache [production]
20:33 <dduvall@deploy1001> Pruned MediaWiki: 1.33.0-wmf.8 (duration: 03m 04s) [production]
20:30 <dduvall@deploy1001> Pruned MediaWiki: 1.33.0-wmf.6 (duration: 09m 15s) [production]
19:36 <SMalyshev> started copying wdqs1008->wdqs2004 for T213854 [production]
19:28 <SMalyshev> depooling wdqs1008 and wdqs2004 for DB copying for T213854 [production]
18:52 <bblack> authdns-update for https://gerrit.wikimedia.org/r/c/operations/dns/+/484546 (make normal git stuff match manual changes already in place) [production]
18:46 <bd808> Dropped A record for www.tools.wmflabs.org and replaced it with a CNAME pointing to tools.wmflabs.org. [tools]
18:44 <hashar> [2019-01-15 18:44:06,959] [main] INFO com.google.gerrit.pgm.Daemon : Gerrit Code Review 2.15.6-5-g4b9c845200 ready [production]
18:43 <hashar> Restarting Gerrit to catch up with a DNS change with the database [production]
18:43 <volans> restarted debmonitor on debmonitor1001 [production]
18:40 <bblack> DNS manually updated for m1-master -> dbproxy1006 and m2-master -> dbproxy1007 [production]
18:29 <bstorm_> T213711 installed python3-requests=2.11.1-1~bpo8+1 python3-urllib3=1.16-1~bpo8+1 on tools-proxy-03, which stopped the bleeding [tools]
17:26 <godog> roll-restart logstash in eqiad - T213081 [production]
17:21 <godog> depool logstash1007 before restarting logstash - T213081 [production]
17:13 <godog> set partitions to 3 for existing kafka-logging topics - T213081 [production]
17:06 <XioNoX> move back cr1-eqiad:xe-4/1/3 to xe-3/3/1 - T212791 [production]
16:57 <XioNoX> move cr1-eqiad:xe-3/3/1 to xe-4/1/3 - T212791 [production]
16:52 <jynus> stop db1115 for hw maintenance [production]
16:50 <godog> roll-restart kafka-logging in eqiad to apply new topic defaults - T213081 [production]
16:00 <jynus> stop es1019 for hw maintenance T213422 [production]
15:53 <dcausse> T210381: elastic search clusters, catching up updates since first import on new psi&omega clusters in eqiad&codfw (from mwmaint1002) [production]
15:10 <fdans@deploy1001> Finished deploy [analytics/superset/deploy@UNKNOWN]: reverting deploy of 0.26.3-wikimedia1 (duration: 00m 32s) [production]
15:10 <fdans@deploy1001> Started deploy [analytics/superset/deploy@UNKNOWN]: reverting deploy of 0.26.3-wikimedia1 [production]
15:02 <fdans@deploy1001> Finished deploy [analytics/superset/deploy@9d6156a]: reverting deploy of 0.26.3-wikimedia1 (duration: 06m 06s) [production]
15:01 <jynus@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1103 (duration: 00m 48s) [production]
14:56 <fdans@deploy1001> Started deploy [analytics/superset/deploy@9d6156a]: reverting deploy of 0.26.3-wikimedia1 [production]
14:56 <fdans> "rolling back to stable superset" [analytics]
14:55 <arturo> disable puppet in tools-docker-registry-01 and tools-docker-registry-02, trying with `role::wmcs::toolforge::docker::registry` in the puppetmaster for -03 and -04. The registry shouldn't be affected by this [tools]
14:41 <fdans@deploy1001> Finished deploy [analytics/superset/deploy@408a30e]: deploying 0.26.3-wikimedia1 (duration: 00m 36s) [production]
14:40 <fdans> deploying superset 0.26.3-wikimedia1 [analytics]
14:40 <fdans@deploy1001> Started deploy [analytics/superset/deploy@408a30e]: deploying 0.26.3-wikimedia1 [production]
14:36 <elukey> stop superset to allow a clean mysqldump [analytics]
14:21 <arturo> T213418 put a backup of the docker registry in NFS just in case: `aborrero@tools-docker-registry-02:$ sudo cp /srv/registry/registry.tar.gz /data/project/.system_sge/docker-registry-backup/` [tools]
14:20 <andrewbogott> changing tools.wmflabs.org to point to tools-proxy-03 in eqiad1 [admin]
14:14 <moritzm> rebooting acamar [production]
13:53 <marostegui> Downtime db1115 and es1019 for 4 hours - T196726 T213422 [production]
13:33 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1119 T85757 (duration: 00m 46s) [production]
13:15 <marostegui> Deploy schema change on db1119 - T85757 [production]
13:15 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1119 T85757 (duration: 00m 46s) [production]
13:00 <elukey> restart memcached on mc1024 to pick up new settings (-R 200) - T208844 [production]
12:47 <dcausse> EU SWAT done [production]
12:36 <dcausse@deploy1001> Synchronized wmf-config/InitialiseSettings.php: T210381: [cirrus] Start writing to psi & omega (take 2) (2/2) (duration: 00m 45s) [production]
12:33 <dcausse@deploy1001> Synchronized wmf-config/CirrusSearch-production.php: T210381: [cirrus] Start writing to psi & omega (take 2) (1/2) (duration: 00m 45s) [production]
12:15 <onimisionipe> starting upgrading of prometheus-elasticsearch-exporter for eqiad T210592 [production]
12:14 <dcausse@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Change links of wgGEHelpPanelLinks for kowiki T209467 (duration: 00m 46s) [production]
12:09 <dcausse@deploy1001> Synchronized wmf-config/CommonSettings.php: [cirrus] Add cirrussearch-big-indices tag T210381 (duration: 00m 46s) [production]
12:06 <jynus> upgrade and restart db1103 [production]
12:03 <onimisionipe> starting upgrading of prometheus-elasticsearch-exporter for codfw T210592 [production]