2018-10-05
23:50 |
<bblack> |
repooling eqiad edge caches, a few days ahead of intended switchback next Weds, to alleviate some traffic engineering concerns over the weekend |
[production] |
20:48 |
<mutante> |
T191183 - it's still showing the error page as before, but that isn't due to Apache issues; it just needs additional ferm rules |
[production] |
20:44 |
<mutante> |
gerrit - adding gerrit.wmfusercontent.org virtual host for avatars. applied first on gerrit2001, then on cobalt (T191183) |
[production] |
20:03 |
<ejegg> |
updated fundraising CiviCRM from ebc2e0076c to 7a0d14015e |
[production] |
19:48 |
<banyek> |
repooling labsdb1009 (T195747) |
[production] |
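The depool (17:58) → maintenance → repool (19:48) cycle logged for labsdb1009 (T195747) is the standard pattern for taking a replica out of rotation. A minimal sketch, assuming WMF conftool's `confctl` CLI; the selector and attribute paths here are illustrative, not taken from this log:

```shell
# Hedged sketch of depool/repool helpers around replica maintenance.
# Assumes conftool's `confctl` CLI; selector syntax is illustrative.
depool() { confctl select "name=$1" set/pooled=no; }
repool() { confctl select "name=$1" set/pooled=yes; }

# Typical cycle, matching the 17:58 depool / 19:48 repool entries:
#   depool labsdb1009
#   ...package upgrade / replication work...
#   repool labsdb1009
```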
19:44 |
<smalyshev@deploy1001> |
Finished deploy [wdqs/wdqs@f8776de]: Redeploy 1009 (duration: 00m 26s) |
[production] |
19:44 |
<smalyshev@deploy1001> |
Started deploy [wdqs/wdqs@f8776de]: Redeploy 1009 |
[production] |
18:37 |
<bblack> |
authdns2001: upgraded gdnsd to 2.99.9930-beta |
[production] |
18:31 |
<bblack> |
gdnsd-2.99.9930-beta-1+wmf1 uploaded to stretch-wikimedia |
[production] |
18:26 |
<mutante> |
icinga - noop on all servers, no change, puppet re-enabled, operations normal |
[production] |
18:08 |
<mutante> |
disabling puppet on icinga for 5 min for extra safety before a change that should be noop |
[production] |
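Disabling puppet before a change that should be a noop is a common safety pattern: with the agent paused, the change can be merged and then dry-run before re-enabling. A hedged sketch of that sequence; `--disable`, `--enable`, and `--test --noop` are standard Puppet agent flags, while the wrapper function itself is hypothetical:

```shell
# Hedged sketch of the "disable, verify noop, re-enable" safety pattern.
# The puppet agent flags are standard; the wrapper is hypothetical.
safe_noop_change() {
    puppet agent --disable "pre-change safety: change should be a noop"
    # ...merge the change on the puppetmaster while the agent is paused...
    puppet agent --enable
    puppet agent --test --noop   # dry run: report what would change, change nothing
}
```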
17:58 |
<banyek> |
depooling labsdb1009 (T195747) |
[production] |
17:50 |
<banyek> |
repooling labsdb1011 (T195747) |
[production] |
17:12 |
<elukey> |
set etcd in codfw as read/write (was readonly) and eqiad as readonly (was read/write) |
[production] |
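Clients typically discover the read-write etcd cluster via a DNS SRV record (see the 08:56–08:58 entries below about the read-only SRV record and cached values), so a switchover like this one can be verified by querying the record directly. A hedged sketch; the record name is illustrative, not the actual production name:

```shell
# Hedged sketch: check which host a (hypothetical) etcd SRV record points at.
# SRV answer format is "priority weight port target"; we print the target.
etcd_srv_target() {
    dig +short SRV "$1" | awk '{print $4}'
}
# e.g.: etcd_srv_target "_etcd._tcp.example.wmnet"
```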
14:57 |
<banyek> |
depooling labsdb1011 (T195747) |
[production] |
14:56 |
<banyek> |
depooling labsdb1011 |
[production] |
13:26 |
<banyek> |
adding wmf-pt-kill_2.2.20-1+wmf3 package for stretch |
[production] |
13:25 |
<moritzm> |
installing python3.5/2.7 security updates |
[production] |
13:02 |
<volans> |
upgraded spicerack to version 0.0.9 on sarin/neodymium/cumin* - T199079 |
[production] |
12:13 |
<vgutierrez> |
Creating certcentral1001.eqiad.wmnet in ganeti - T206308 |
[production] |
12:12 |
<vgutierrez> |
Creating certcentral2001.codfw.wmnet in ganeti - T206308 |
[production] |
11:59 |
<elukey> |
deleted bohrium from ganeti via gnt-instance |
[production] |
11:43 |
<moritzm> |
rebooting wezen for kernel security update |
[production] |
11:29 |
<moritzm> |
rebooting ruthenium for kernel security update |
[production] |
10:40 |
<jynus> |
restarting replication on labsdb1010/1 on s3 and s5 |
[production] |
10:37 |
<volans> |
uploaded spicerack_0.0.9-1{,+deb9u1} to apt.wikimedia.org {jessie,stretch}-wikimedia - T199079 |
[production] |
10:17 |
<moritzm> |
rearmed keyholder on netmon2001 |
[production] |
10:10 |
<elukey> |
restart confd on labs-puppetmaster to pick up new etcd settings (eqiad -> codfw) |
[production] |
10:03 |
<_joe_> |
restarting navtiming.service on webperf1001 to pick up the dns change for etcd |
[production] |
09:37 |
<elukey> |
restart rsyslog on lithium - broken connection to tegmen - T199406 |
[production] |
09:37 |
<banyek> |
disabling puppet on labsdb1009,labsdb1010,labsdb1011 (T203674) |
[production] |
09:36 |
<banyek> |
adding wmf-pt-kill_2.2.20-1+wmf2 package for stretch |
[production] |
09:16 |
<volans> |
rebooting tegmen, console stuck, possible re-occurrence of T199413 (to be confirmed) |
[production] |
09:12 |
<jynus@deploy1001> |
Synchronized wmf-config/db-eqiad.php: Move some wikis for s3 to s5 (duration: 00m 56s) |
[production] |
09:06 |
<elukey> |
stop etcdmirror replication on conf2002 |
[production] |
09:05 |
<_joe_> |
restarting confd on all nodes in eqiad and esams |
[production] |
08:58 |
<_joe_> |
wiped cached values for the read-only etcd SRV record |
[production] |
08:56 |
<_joe_> |
read-write connections to etcd only go to codfw now |
[production] |
08:35 |
<_joe_> |
reenabling notifications for etcdmirror on conf1005 |
[production] |
08:02 |
<jynus> |
start replication on db1069 (x1) |
[production] |
07:54 |
<jynus> |
starting replication on db1075, db1070, db1070:s3 with gtid disabled |
[production] |
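Starting replication "with gtid disabled" maps, on MariaDB, to switching the connection back to binlog-position mode before starting the slave thread. A hedged sketch wrapping the SQL in a shell helper; `CHANGE MASTER TO master_use_gtid=no` and `START SLAVE` are standard MariaDB statements, while the wrapper and host handling are hypothetical:

```shell
# Hedged sketch: start replication with GTID disabled on a MariaDB replica.
# The SQL is standard MariaDB; the wrapper function is hypothetical.
start_replication_no_gtid() {
    mysql -h "$1" -e "CHANGE MASTER TO master_use_gtid=no; START SLAVE;"
}
# e.g.: start_replication_no_gtid db1075
```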
07:50 |
<jynus> |
stopping dbstore1001:x1 |
[production] |
07:33 |
<jynus> |
changing s3 master for db1070 |
[production] |
07:28 |
<jynus> |
stopping s3 replication on db1070 |
[production] |
07:20 |
<jynus> |
stopping x1 replication on db1069 |
[production] |
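The per-section entries above (db1069:x1, db1070:s3, dbstore1001:x1) reflect MariaDB multi-source replication, where each section is a named connection that can be stopped and started independently of the others. A hedged sketch; the named-connection `STOP SLAVE 'name'` form is standard MariaDB syntax, while the wrapper is hypothetical:

```shell
# Hedged sketch: stop one named replication channel on a multi-source replica.
# MariaDB addresses each connection by name, e.g. STOP SLAVE 'x1';
stop_channel() { mysql -h "$1" -e "STOP SLAVE '$2';"; }
# e.g.: stop_channel db1069 x1
```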