2019-01-16
08:18 <akosiaris> depool codfw zotero for helm release cleanups [production]
08:15 <marostegui> Upgrade MySQL on db2043 (s3 codfw master) [production]
08:11 <elukey> drop unneeded tables from the staging db on dbstore1002 according to T212493#4883535 [production]
07:36 <vgutierrez> powercycling cp1088 - T203194 [production]
07:27 <marostegui> Drop table tag_summary from s2 - T212255 [production]
07:14 <marostegui> Upgrade MySQL on db2050 and db2036 [production]
06:07 <SMalyshev> started transfer wdqs2005->2006 [production]
06:06 <marostegui> Deploy schema change on db1067 (s1 primary master) - T85757 [production]
06:01 <SMalyshev> depooling wdqs2005 and wdqs2006 for T213854 [production]
01:02 <SMalyshev> repooled wdqs200[45] for now, 2006 still not done, will get to it later today [production]
00:15 <mobrovac@deploy1001> Finished deploy [restbase/deploy@a04ebdd]: Restart RESTBase to pick up the fact that restbase1016 is not there - T212418 (duration: 21m 34s) [production]
2019-01-15
23:54 <mobrovac@deploy1001> Started deploy [restbase/deploy@a04ebdd]: Restart RESTBase to pick up the fact that restbase1016 is not there - T212418 [production]
22:53 <tzatziki> removing one file for legal compliance [production]
22:50 <jforrester@deploy1001> Synchronized php-1.33.0-wmf.13/extensions/WikibaseMediaInfo/resources/filepage/CaptionsPanel.js: Hot-deploy Ibb1f763f to unbreak setting captions on WikibaseMediaInfo (duration: 00m 51s) [production]
22:39 <SMalyshev> repooled wdqs1008 [production]
21:49 <XioNoX> re-activate BGP to Zayo on cr1-eqiad - T212791 [production]
21:39 <SMalyshev> depooling wdqs2005 for T213854 [production]
21:23 <mutante> contint1001 rmdir /srv/org/wikimedia/integration/coverage ; rmdir /srv/org/wikimedia/integration/logs (T137890) [production]
21:21 <mutante> doc.wikimedia.org httpd config has been removed from contint1001, is now on doc1001 [production]
21:13 <dduvall@deploy1001> rebuilt and synchronized wikiversions files: group0 to 1.33.0-wmf.13 [production]
21:09 <dduvall@deploy1001> Finished scap: testwiki to php-1.33.0-wmf.13 and rebuild l10n cache (duration: 32m 42s) [production]
20:36 <dduvall@deploy1001> Started scap: testwiki to php-1.33.0-wmf.13 and rebuild l10n cache [production]
20:33 <dduvall@deploy1001> Pruned MediaWiki: 1.33.0-wmf.8 (duration: 03m 04s) [production]
20:30 <dduvall@deploy1001> Pruned MediaWiki: 1.33.0-wmf.6 (duration: 09m 15s) [production]
19:36 <SMalyshev> started copying wdqs1008->wdqs2004 for T213854 [production]
19:28 <SMalyshev> depooling wdqs1008 and wdqs2004 for DB copying for T213854 [production]
18:52 <bblack> authdns-update for https://gerrit.wikimedia.org/r/c/operations/dns/+/484546 (make normal git stuff match manual changes already in place) [production]
18:44 <hashar> [2019-01-15 18:44:06,959] [main] INFO com.google.gerrit.pgm.Daemon : Gerrit Code Review 2.15.6-5-g4b9c845200 ready [production]
18:43 <hashar> Restarting Gerrit to pick up a DNS change for its database [production]
18:43 <volans> restarted debmonitor on debmonitor1001 [production]
18:40 <bblack> DNS manually updated for m1-master -> dbproxy1006 and m2-master -> dbproxy1007 [production]
17:26 <godog> roll-restart logstash in eqiad - T213081 [production]
17:21 <godog> depool logstash1007 before restarting logstash - T213081 [production]
17:13 <godog> set partitions to 3 for existing kafka-logging topics - T213081 [production]
17:06 <XioNoX> move back cr1-eqiad:xe-4/1/3 to xe-3/3/1 - T212791 [production]
16:57 <XioNoX> move cr1-eqiad:xe-3/3/1 to xe-4/1/3 - T212791 [production]
16:52 <jynus> stop db1115 for hw maintenance [production]
16:50 <godog> roll-restart kafka-logging in eqiad to apply new topic defaults - T213081 [production]
16:00 <jynus> stop es1019 for hw maintenance T213422 [production]
15:53 <dcausse> T210381: elasticsearch clusters: catching up on updates since the first import on the new psi & omega clusters in eqiad & codfw (from mwmaint1002) [production]
15:10 <fdans@deploy1001> Finished deploy [analytics/superset/deploy@UNKNOWN]: reverting deploy of 0.26.3-wikimedia1 (duration: 00m 32s) [production]
15:10 <fdans@deploy1001> Started deploy [analytics/superset/deploy@UNKNOWN]: reverting deploy of 0.26.3-wikimedia1 [production]
15:02 <fdans@deploy1001> Finished deploy [analytics/superset/deploy@9d6156a]: reverting deploy of 0.26.3-wikimedia1 (duration: 06m 06s) [production]
15:01 <jynus@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1103 (duration: 00m 48s) [production]
14:56 <fdans@deploy1001> Started deploy [analytics/superset/deploy@9d6156a]: reverting deploy of 0.26.3-wikimedia1 [production]
14:41 <fdans@deploy1001> Finished deploy [analytics/superset/deploy@408a30e]: deploying 0.26.3-wikimedia1 (duration: 00m 36s) [production]
14:40 <fdans@deploy1001> Started deploy [analytics/superset/deploy@408a30e]: deploying 0.26.3-wikimedia1 [production]
14:14 <moritzm> rebooting acamar [production]
13:53 <marostegui> Downtime db1115 and es1019 for 4 hours - T196726 T213422 [production]
13:33 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1119 T85757 (duration: 00m 46s) [production]