2018-11-14
09:59 <banyek> depooling db2046 (T85757) [production]
09:22 <moritzm> updated stretch netinst image for 9.6 point release [production]
09:17 <marostegui> Deploy schema change on db2053 - T86339 [production]
08:24 <marostegui> Deploy schema change on s5 codfw master, this will generate lag on s5 codfw - T205913 [production]
08:22 <marostegui> Deploy schema change on s7 codfw master, this will generate lag on s7 codfw - T205913 [production]
08:19 <marostegui> Deploy schema change on s2 codfw master, this will generate lag on s2 codfw - T205913 [production]
08:17 <marostegui> Deploy schema change on s6 codfw master, this will generate lag on s6 codfw - T205913 [production]
08:14 <marostegui> Deploy schema change on s4 codfw master, this will generate lag on s4 codfw - T205913 [production]
08:08 <marostegui> Deploy schema change on s3 codfw master, this will generate lag on s3 codfw - T205913 [production]
08:07 <godog> rollout rsyslog_exporter to eqiad [production]
07:42 <marostegui> Deploy schema change on s3 codfw master, this will generate lag on s3 codfw - T203709 [production]
07:18 <marostegui> Deploy schema change on s7 codfw master, this will generate lag on s7 codfw - T203709 [production]
07:07 <marostegui> Deploy schema change on s2 codfw master, this will generate lag on s2 codfw - T203709 [production]
06:52 <marostegui> Deploy schema change on s4 codfw master, this will generate lag on s4 codfw - T203709 [production]
06:40 <marostegui> Deploy schema change on s6 codfw master, this will generate lag on s6 codfw - T203709 [production]
06:31 <marostegui> Stop MySQL on pc2005 to clone it to pc2008 - T208383 [production]
06:27 <marostegui@deploy1001> Synchronized wmf-config/db-codfw.php: Depool pc2005 - T208383 (duration: 01m 04s) [production]
05:46 <_joe_> restarting gerrit [production]
01:02 <thcipriani@deploy1001> Synchronized php-1.33.0-wmf.3/extensions/Wikibase/client/includes: SWAT: [[gerrit:473166|Update: use wikibase-debug logger instead of "PageRandomLookup"]] T208796 (duration: 00m 56s) [production]
00:42 <mutante> restarted smokeping on netmon1002 and netmon2001 [production]
2018-11-13
22:42 <XioNoX> restart librenms irc bot [production]
22:24 <XioNoX> add term labnet-nova-api to cloud-in4 on cr1/2-eqiad - T209424 [production]
20:22 <herron> updated labs realm smarthosts (via hiera) to mx-out0[12].wmflabs.org T41785 [production]
19:49 <otto@deploy1001> Finished deploy [analytics/refinery@62d6f4b]: Deploy hive jars from CDH 5.10.0 to workaround Refine bug: T209407 (duration: 05m 57s) [production]
19:43 <otto@deploy1001> Started deploy [analytics/refinery@62d6f4b]: Deploy hive jars from CDH 5.10.0 to workaround Refine bug: T209407 [production]
19:31 <herron> uploaded librdkafka_0.11.6-1~bpo9+1+wikimedia1 packages to stretch-wikimedia T209300 [production]
18:11 <mutante> the CUSTOM message from ores.svc.codfw was the (one-time) test of the new Icinga server [production]
18:03 <mutante> icinga migration has concluded, we are now on stretch and icinga1001, einsteinium is passive (T202782) [production]
17:27 <mutante> re-enabled puppet on icinga1001, einsteinium becoming passive [production]
17:21 <mutante> ran puppet on einsteinium; re-enabling puppet on tegmen and icinga1001 [production]
17:13 <bstorm_> Added 172.16.0.0/21 to the allowed connections for wikilabels postgresql on labsdb1004 [production]
17:04 <mutante> disabled puppet on all 3 icinga servers, re-enabling on einsteinium, going through https://wikitech.wikimedia.org/wiki/Icinga#Failover_Icinga_between_the_active_and_passive_servers [production]
17:02 <ejegg> updated payments-wiki from 20542c9184 to 5751286f1c [production]
17:01 <mutante> starting migration of icinga server - maintenance windows [production]
16:33 <thcipriani> restarting gerrit service for upgrade to 2.15.6 [production]
16:32 <thcipriani@deploy1001> Finished deploy [gerrit/gerrit@d2763c6]: v2.15.6 to cobalt (duration: 00m 10s) [production]
16:32 <thcipriani@deploy1001> Started deploy [gerrit/gerrit@d2763c6]: v2.15.6 to cobalt [production]
16:29 <thcipriani@deploy1001> Finished deploy [gerrit/gerrit@d2763c6]: v2.15.6 to gerrit2001 (duration: 00m 11s) [production]
16:29 <thcipriani@deploy1001> Started deploy [gerrit/gerrit@d2763c6]: v2.15.6 to gerrit2001 [production]
16:22 <anomie@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Setting actor migration to write-both/read-old on test wikis and mediawikiwiki (T188327) (duration: 00m 54s) [production]
16:07 <anomie@mwmaint1002> Running refreshExternallinksIndex.php on labtestwiki for T209373 [production]
16:07 <anomie@mwmaint1002> Running refreshExternallinksIndex.php on section 3 wikis in group 0 for T209373 [production]
15:48 <_joe_> upgrading extensions on all appservers / jobrunners while upgrading to php 7.2 [production]
15:45 <gehel> restart tilerator on maps1004 [production]
15:21 <moritzm> draining ganeti1006 for reboot/kernel security update [production]
15:18 <marostegui> Restore replication consistency options on dbstore2002:3313 as it has caught up - T208320 [production]
14:59 <akosiaris> increase the migration downtime for kafkamon1001; it should make live migration of these VMs easier, without the need for manual fiddling [production]
14:54 <hashar@deploy1001> rebuilt and synchronized wikiversions files: group to 1.33.0-wmf.4 | T206658 [production]
14:40 <hashar@deploy1001> Finished scap: testwiki to php-1.33.0-wmf.4 | T206658 (duration: 19m 34s) [production]
14:27 <moritzm> draining ganeti1007 for reboot/kernel security update [production]