2018-11-13
ยง
|
18:14 <andrewbogott> moving integration-slave-docker-1038 to eqiad1-r [integration]
18:12 <andrewbogott> moving integration-slave-docker-1034 to eqiad1-r [integration]
18:11 <mutante> the CUSTOM message from ores.svc.codfw was the (one-time) test of the new Icinga server [production]
18:03 <mutante> icinga migration has concluded, we are now on stretch and icinga1001, einsteinium is passive (T202782) [production]
17:40 <arturo> remove misctools 1.31 and jobutils 1.30 from the stretch-tools repo (T207970) [tools]
17:27 <mutante> re-enabled puppet on icinga1001, einsteinium becoming passive [production]
17:21 <mutante> ran puppet on einsteinium; re-enabling puppet on tegmen and icinga1001 [production]
17:13 <bstorm_> Added 172.16.0.0/21 to the allowed connections for wikilabels postgresql on labsdb1004 [production]
17:04 <mutante> disabled puppet on all 3 icinga servers, re-enabling on einsteinium, going through https://wikitech.wikimedia.org/wiki/Icinga#Failover_Icinga_between_the_active_and_passive_servers [production]
17:02 <ejegg> updated payments-wiki from 20542c9184 to 5751286f1c [production]
17:01 <mutante> starting migration of icinga server - maintenance windows [production]
16:33 <thcipriani> restarting gerrit service for upgrade to 2.15.6 [production]
16:32 <thcipriani@deploy1001> Finished deploy [gerrit/gerrit@d2763c6]: v2.15.6 to cobalt (duration: 00m 10s) [production]
16:32 <thcipriani@deploy1001> Started deploy [gerrit/gerrit@d2763c6]: v2.15.6 to cobalt [production]
16:29 <thcipriani@deploy1001> Finished deploy [gerrit/gerrit@d2763c6]: v2.15.6 to gerrit2001 (duration: 00m 11s) [production]
16:29 <thcipriani@deploy1001> Started deploy [gerrit/gerrit@d2763c6]: v2.15.6 to gerrit2001 [production]
16:22 <anomie@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Setting actor migration to write-both/read-old on test wikis and mediawikiwiki (T188327) (duration: 00m 54s) [production]
16:07 <anomie@mwmaint1002> Running refreshExternallinksIndex.php on labtestwiki for T209373 [production]
16:07 <anomie@mwmaint1002> Running refreshExternallinksIndex.php on section 3 wikis in group 0 for T209373 [production]
15:53 <halfak> ran "sudo service uwsgi-wikilabels-web restart" on wikilabels-02 [wikilabels]
15:48 <_joe_> upgrading extensions on all appservers / jobrunners while upgrading to php 7.2 [production]
15:45 <gehel> restart tilerator on maps1004 [production]
15:21 <moritzm> draining ganeti1006 for reboot/kernel security update [production]
15:18 <marostegui> Restore replication consistency options on dbstore2002:3313 as it has caught up - T208320 [production]
14:59 <akosiaris> increase the migration downtime for kafkamon1001. It should make live migration of these VMs easier, without the need for manual fiddling [production]
14:54 <hashar@deploy1001> rebuilt and synchronized wikiversions files: group to 1.33.0-wmf.4 | T206658 [production]
14:40 <hashar@deploy1001> Finished scap: testwiki to php-1.33.0-wmf.4 | T206658 (duration: 19m 34s) [production]
14:27 <moritzm> draining ganeti1007 for reboot/kernel security update [production]
14:20 <hashar@deploy1001> Started scap: testwiki to php-1.33.0-wmf.4 | T206658 [production]
14:20 <akosiaris> reboot logstash1007, logstash1008, logstash1009 with 500 secs of sleep between them for the migration_downtime ganeti setting to be applied [production]
14:18 <akosiaris> increase the migration downtime for logstash1007, logstash1008, logstash1009. It should make live migration of these VMs easier, without the need for manual fiddling [production]
14:15 <hashar@deploy1001> Pruned MediaWiki: 1.32.0-wmf.24 (duration: 08m 55s) [production]
14:03 <hashar> Applied security patches to 1.33.0-wmf.4 | T206658 [production]
14:03 <gehel> start plugin and JVM upgrade on elasticsearch / cirrus / codfw - T209293 [production]
14:00 <hashar> scap prep 1.33.0-wmf.4 # T206658 [production]
13:58 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Pool pc2007 to replace pc2004 (duration: 00m 48s) [production]
13:41 <marostegui> Deploy schema change on s8 codfw master (db2045); this will generate lag on s8 codfw - T203709 [production]
13:40 <hashar> Cutting wmf/1.33.0-wmf.4 branch | T206658 [production]
13:32 <gtirloni> pointed mail.tools.wmflabs.org to new IP 208.80.155.158 [tools]
13:30 <moritzm> draining ganeti1008 for reboot/kernel security update [production]
13:29 <gtirloni> Changed active mail relay to tools-mail-02 (T209356) [tools]
13:22 <arturo> T207970 misctools and jobutils v1.32 are now in both `stretch-tools` and `stretch-toolsbeta` repos in tools-services-01 [tools]
13:05 <arturo> T207970 there is now a `stretch-toolsbeta` repo in tools-services-01, still empty [tools]
13:02 <arturo> a puppet refactor for the aptly module may have caused some puppet issues. Should be solved now [mwv-apt]
13:01 <arturo> a puppet refactor for the aptly module may have caused some puppet issues. Should be solved now [releng]
13:01 <arturo> a puppet refactor for the aptly module may have caused some puppet issues. Should be solved now [deployment-prep]
12:59 <arturo> the puppet issue has been solved by reverting the code [tools]
12:51 <phuedx> European Mid-day SWAT finished [production]
12:50 <phuedx@deploy1001> Finished scap: SWAT: [[gerrit:473164|Define WikimediaMessages for Wikibase SEO change]] l10n refresh (duration: 21m 43s) [production]
12:28 <arturo> puppet broken in toolforge due to a refactor. Will be fixed in a bit [tools]