2019-09-05
09:05 <hashar@deploy1001> rebuilt and synchronized wikiversions files: Promote wikidatawiki to 1.34.0-wmf.21 for T232035 - T220746 [production]
09:04 <vgutierrez> rolling back from ats-tls to nginx on cp3034 - T231433 [production]
08:55 <hashar@deploy1001> rebuilt and synchronized wikiversions files: Rollback wikidatawiki to 1.34.0-wmf.20 for T232035 [production]
08:38 <oblivian@puppetmaster1001> conftool action : set/pooled=false; selector: dnsdisc=a.*-ro,name=codfw [production]
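A minimal sketch of the confctl invocation behind the conftool line above, assuming the standard conftool CLI on puppetmaster1001 (the selector and value come from the logged action; sudo and exact quoting are illustrative):

    # Hypothetical reconstruction: depool the a.*-ro discovery records in codfw.
    # confctl itself emits the "conftool action : ..." line seen in the log.
    sudo confctl --object-type discovery select 'dnsdisc=a.*-ro,name=codfw' set/pooled=false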
08:37 <jmm@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
08:35 <jmm@cumin1001> START - Cookbook sre.hosts.downtime [production]
08:32 <akosiaris> depool restbase1022 T232007 [production]
08:30 <vgutierrez> rebooting cp3034 [production]
08:23 <vgutierrez> repooling cp3034 [production]
08:21 <hashar@deploy1001> rebuilt and synchronized wikiversions files: Promote wikidatawiki to 1.34.0-wmf.21 for T232035 - T220746 [production]
08:16 <moritzm> reimage restbase-dev1004 to Stretch T224554 [production]
08:13 <_joe_> upgrading scap on deploy1001 [production]
08:09 <vgutierrez> depooling cp3034 due to intermittent network issues [production]
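The depool/repool steps around cp3034 above are the usual drain-and-restore pattern for cache hosts; a sketch, assuming the conftool wrapper scripts available on production hosts (the wrapper names are an assumption; only the hostname and the reason come from the log):

    # Hypothetical: run on cp3034 itself.
    sudo depool   # set pooled=no for the host's services via conftool
    # ... maintenance window: reboot, nginx/ats-tls switchover ...
    sudo pool     # return the host to service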
07:57 <_joe_> upgrading scap on mwdebug1001 [production]
07:56 <_joe_> uploading scap 3.12.1 to reprepro on all distros T224857 [production]
07:56 <hashar> Switching "wikidatawiki" on mwdebug1001 to 1.34.0-wmf.21 by editing /srv/mediawiki/wikiversions.php # T232035 [production]
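The 07:56 mwdebug1001 entry points a single wiki at the new branch for testing before the fleet-wide promote; a sketch of what that edit amounts to, assuming wikiversions.php holds a dbname-to-branch map (the sed one-liner and the exact array syntax are illustrative, not the command actually run):

    # Hypothetical: switch wikidatawiki from wmf.20 to wmf.21 on the debug host only.
    sudo sed -i "s/'wikidatawiki' => 'php-1.34.0-wmf.20'/'wikidatawiki' => 'php-1.34.0-wmf.21'/" \
        /srv/mediawiki/wikiversions.php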
07:53 <marostegui> Remove old backups for db2037 and db2042 from dbprov2001 [production]
07:45 <marostegui> Remove puppet grants from m1 for the following IPs: 10.64.0.165 10.64.16.159 10.64.16.18 T231539 [production]
07:32 <moritzm> upgrading mw1293-mw1296, mw1299-mw1306 to PHP 7.2.22 [production]
07:31 <mutante> ununpentium - removed /etc/envoy/envoy.yaml; ran /usr/local/sbin/build-envoy-config -c /etc/envoy to regenerate config without the 443 listener; ran puppet; envoy now running on jessie [production]
07:07 <mutante> ununpentium - manually delete /etc/envoy/listeners.d/00-tls_terminator_443.yaml after changing port to 1443 - puppet does not remove it [production]
06:44 <kart_> Updated cxserver to 2019-09-04-065911-production (T213255, T206310) [production]
06:41 <@> helmfile [EQIAD] Ran 'apply' command on namespace 'cxserver' for release 'production' . [production]
06:39 <@> helmfile [CODFW] Ran 'apply' command on namespace 'cxserver' for release 'production' . [production]
06:38 <@> helmfile [STAGING] Ran 'apply' command on namespace 'cxserver' for release 'staging' . [production]
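The three helmfile lines above are the messages logged by the Kubernetes service deploy for cxserver; a sketch of the corresponding commands, assuming the charts live under /srv/deployment-charts/helmfile.d/services/cxserver on the deployment host (the path and the order of environments are assumptions beyond what the log shows):

    # Hypothetical: apply the cxserver release to each environment in turn.
    cd /srv/deployment-charts/helmfile.d/services/cxserver
    helmfile -e staging apply
    helmfile -e codfw apply
    helmfile -e eqiad apply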
05:42 <marostegui> Remove grants for dbproxy1005 T231280 T231967 [production]
05:31 <marostegui> Restart MySQL on eqiad sanitariums (db1124 and db1125) to pick up new filters - T51195 [production]
05:29 <marostegui> Restart wikibugs [production]
05:21 <mutante> ganeti2005 - DRAC reset fails - ipmi_cmd_cold_reset: bad completion code [production]
05:19 <mutante> ganeti2005 - reset DRAC via local IPMI since mgmt stopped responding [production]
05:14 <marostegui> Restart MySQL on codfw sanitariums (db2094 and db2095) to pick up new filters - T51195 [production]
04:57 <vgutierrez> rearming keyholder on cumin1001 [production]
04:42 <vgutierrez> upgrading ATS to 8.0.5-1wm5 on cp4021 - T231433 [production]
04:37 <vgutierrez> switching cp4021 from nginx to ats-tls - T231433 [production]
04:31 <vgutierrez> upgrading ATS to 8.0.5-1wm5 on cp3034 - T231433 [production]
04:20 <vgutierrez> switching cp3034 from nginx to ats-tls - T231433 [production]
04:02 <vgutierrez> upgrading ATS to 8.0.5-1wm5 on cp1076 - T231433 [production]
03:57 <vgutierrez> switching cp1076 from nginx to ats-tls - T231433 [production]
00:55 <jforrester@deploy1001> Synchronized wmf-config/CommonSettings.php: CommonSettings: Factor out write of variant config into MWConfigCacheGenerator, part 2 (duration: 00m 53s) [production]
00:54 <jforrester@deploy1001> Synchronized multiversion/MWConfigCacheGenerator.php: CommonSettings: Factor out write of variant config into MWConfigCacheGenerator, part 1 (duration: 00m 56s) [production]
00:04 <jforrester@deploy1001> Synchronized wmf-config/CommonSettings.php: CommonSettings: Factor out load of variant config into MWConfigCacheGenerator, part 2 (duration: 00m 55s) [production]
00:02 <jforrester@deploy1001> Synchronized multiversion/MWConfigCacheGenerator.php: CommonSettings: Factor out load of variant config into MWConfigCacheGenerator, part 1 (duration: 00m 55s) [production]
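The Synchronized lines above are scap's own completion messages for single-file syncs from deploy1001; a sketch of the command that produces one of them, assuming scap's sync-file subcommand (the file path and message are taken from the log, the invocation itself is illustrative):

    # Hypothetical: push one config file to the fleet; scap logs
    # "Synchronized <path>: <message> (duration: ...)" when done.
    scap sync-file wmf-config/CommonSettings.php \
        'CommonSettings: Factor out load of variant config into MWConfigCacheGenerator, part 2'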
2019-09-04
23:36 <jforrester@deploy1001> Synchronized wmf-config/CommonSettings.php: CommonSettings: Factor out variant config generation into MWConfigCacheGenerator, part 2 (duration: 00m 55s) [production]
23:33 <jforrester@deploy1001> Synchronized multiversion/MWConfigCacheGenerator.php: CommonSettings: Factor out variant config generation into MWConfigCacheGenerator, part 1 (duration: 00m 54s) [production]
23:05 <urandom> decommission restbase-dev1004-b (Cassandra) -- T224554 [production]
21:58 <andrewbogott> attached to console on cumin1001, found it in bios 'system settings', exited, allowed boot to continue. No idea how it got there -- spontaneous reboot? [production]
21:12 <crusnov@deploy1001> Finished deploy [netbox/deploy@367ca84]: (no justification provided) (duration: 08m 55s) [production]
21:03 <crusnov@deploy1001> Started deploy [netbox/deploy@367ca84]: (no justification provided) [production]
20:14 <urandom> decommission restbase-dev1004-a (Cassandra) -- T224554 [production]
20:00 <@> helmfile [STAGING] Ran 'apply' command on namespace 'sessionstore' for release 'staging' . [production]