2021-10-04
07:29 <joal@deploy1002> Finished deploy [analytics/refinery@38f3adc]: Hotfix analytics deploy [analytics/refinery@38f3adc] (duration: 19m 18s) [production]
07:19 <dcausse> restarting blazegraph on wdqs2001 & wdqs2004 (allocators burning too quickly) [production]
07:18 <elukey> depool + restart blazegraph + restart updater for wdqs1006 [production]
07:18 <elukey@puppetmaster1001> conftool action : set/pooled=inactive; selector: name=wdqs1006.wmnet [production]
07:18 <elukey@puppetmaster1001> conftool action : set/pooled=inactive; selector: name=wdqs1004.wmnet [production]
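The 07:18 "conftool action" lines are the audit messages conftool logs when a host is set to inactive. A minimal sketch of how such a depool-plus-restart is typically done with the confctl CLI; the sudo policy and the wdqs-blazegraph/wdqs-updater unit names are assumptions, while the selector follows the logged entry:

    # Set the host inactive in conftool (depooled and excluded from automatic pooling)
    sudo confctl select 'name=wdqs1006.wmnet' set/pooled=inactive
    # Restart Blazegraph and the updater on the depooled host (unit names assumed)
    sudo systemctl restart wdqs-blazegraph.service wdqs-updater.service
    # Check the resulting conftool state before repooling later
    sudo confctl select 'name=wdqs1006.wmnet' get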
07:10 <joal@deploy1002> Started deploy [analytics/refinery@38f3adc]: Hotfix analytics deploy [analytics/refinery@38f3adc] [production]
07:02 <godog> swift eqiad-prod: add weight to ms-be10[64-67] - T290546 [production]
06:44 <elukey> depool + restart blazegraph + restart updater on wdqs1004 [production]
05:50 <ladsgroup@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production'. [production]
05:49 <ladsgroup@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production'. [production]
05:47 <ladsgroup@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'staging'. [production]
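The three ladsgroup entries (05:47 staging, 05:49 codfw, 05:50 eqiad) are the log lines emitted after a helmfile 'sync' of the changeprop-jobqueue service in each environment. A rough sketch of the corresponding invocation on the deployment host; the /srv/deployment-charts path and directory layout are assumptions, the helmfile -e <env> sync syntax is standard helmfile:

    # From the service's helmfile directory on the deploy host (path assumed)
    cd /srv/deployment-charts/helmfile.d/services/changeprop-jobqueue
    # Roll the change out one environment at a time: staging first, then codfw, then eqiad
    helmfile -e staging sync
    helmfile -e codfw sync
    helmfile -e eqiad sync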
2021-10-01
23:19 <bd808@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'toolhub' for release 'main'. [production]
22:27 <mutante> puppetmaster2001 - systemctl reset-failed [production]
22:16 <mutante> puppetmaster2001 - systemctl disable geoip_update_ipinfo.timer [production]
22:15 <mutante> puppetmaster2001 - ran sudo /usr/local/bin/geoipupdate_job after adding new shell command and timer - successfully downloaded enterprise database for T288844 [production]
21:56 <bd808@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'toolhub' for release 'main'. [production]
21:44 <mutante> puppetmasters - temp. disabling puppet one more time, now for a different deploy, to fetch an additional MaxMind database - T288844 [production]
21:19 <mutante> puppetmaster2001 - puppet removed cron sync_volatile and cron sync_ca - starting and verifying new timers: 'systemctl status sync-puppet-volatile', 'systemctl status sync-puppet-ca' - T273673 [production]
21:12 <mutante> puppetmaster1002, puppetmaster1003, puppetmaster2002, puppetmaster2003: re-enabled puppet, they are backends. Backends don't have the sync cron/job/timer, so noop as well, just like 1004/1005/2004/2005. This just leaves the actual change on 2001 - T273673 [production]
21:07 <mutante> puppetmaster1004, puppetmaster1005, puppetmaster2004, puppetmaster2005: re-enabled puppet, they are "insetup" role [production]
21:06 <mbsantos@deploy1002> Finished deploy [kartotherian/deploy@d309a6e] (eqiad): tegola: reduce load to 50% during the weekend (duration: 00m 54s) [production]
21:05 <mbsantos@deploy1002> Started deploy [kartotherian/deploy@d309a6e] (eqiad): tegola: reduce load to 50% during the weekend [production]
21:05 <mutante> puppetmaster1001 - re-enabled puppet, noop as expected; the passive host pulls from the active one, so only 2001 has the cron/job/timer [production]
21:05 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
21:02 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
21:01 <legoktm@deploy1002> Synchronized wmf-config/CommonSettings.php: Revert "Have PdfHandler use Shellbox on Commons for 10% of requests" (duration: 00m 59s) [production]
20:58 <mutante> temp disabling puppet on puppetmasters - deploying gerrit:724115 (gerrit:723310) - T273673 [production]
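The 20:58 entry disables Puppet across the puppetmasters before merging gerrit:724115; the 21:05-21:12 entries above then re-enable it host by host. A hedged sketch of one way to do this from a cumin host; the 'puppetmaster*' host query is an assumption, the puppet agent flags are standard Puppet:

    # Disable the agent with a reason so other operators can see why it is off
    sudo cumin 'puppetmaster*' 'puppet agent --disable "deploying gerrit:724115 - T273673"'
    # ... merge and deploy the change on the active puppetmaster ...
    # Re-enable once the change is verified
    sudo cumin 'puppetmaster*' 'puppet agent --enable'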
18:58 <robh@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-db1002.eqiad.wmnet with reason: REIMAGE [production]
18:56 <robh@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-db1001.eqiad.wmnet with reason: REIMAGE [production]
18:55 <robh@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on an-db1002.eqiad.wmnet with reason: REIMAGE [production]
18:53 <robh@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on an-db1001.eqiad.wmnet with reason: REIMAGE [production]
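The START/END pairs above are the standard log lines from the sre.hosts.downtime cookbook, which schedules monitoring downtime on a host ahead of the reimage attempts logged below. A sketch of the invocation that would produce them; the exact flag names are an assumption inferred from the logged parameters (2:00:00, reason REIMAGE):

    # Downtime an-db1001 for 2 hours with reason "REIMAGE" (flag names assumed)
    sudo cookbook sre.hosts.downtime --hours 2 --reason "REIMAGE" an-db1001.eqiad.wmnet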
18:07 <robh@cumin1001> END (ERROR) - Cookbook sre.experimental.reimage (exit_code=97) for host an-db1001.eqiad.wmnet [production]
18:05 <robh@cumin1001> START - Cookbook sre.experimental.reimage for host an-db1001.eqiad.wmnet [production]
17:58 <effie> depool mw1025, mw1319, mw1312 for test [production]
16:20 <dancy> testing upcoming Scap 4.0.2 release on beta [production]
14:04 <bblack> C:envoyproxy (appservers and others): restarting envoyproxy [production]
14:04 <bblack> C:envoyproxy (appservers and others): ca-certificates updated via cumin to work around T292291 issues [production]
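bblack's 14:04 entries target every host carrying the envoyproxy Puppet class ('C:envoyproxy' is cumin's Puppet-class selector) to refresh ca-certificates for T292291 (the expired Let's Encrypt root mentioned in the 13:23 entry below) and then restart Envoy. A hedged sketch; the apt invocation and the envoyproxy.service unit name are assumptions:

    # Refresh the CA bundle on all hosts with the envoyproxy class (package upgrade assumed)
    sudo cumin 'C:envoyproxy' 'apt-get update && apt-get install -y --only-upgrade ca-certificates'
    # Restart Envoy so it picks up the updated trust store (unit name assumed)
    sudo cumin 'C:envoyproxy' 'systemctl restart envoyproxy.service'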
13:45 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
13:45 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
13:23 <bblack> manually trying LE expired root workaround on mwdebug1001 with puppet disabled ... [production]
13:12 <gehel@cumin1001> START - Cookbook sre.wdqs.data-reload [production]
13:11 <gehel@cumin1001> END (ERROR) - Cookbook sre.wdqs.data-reload (exit_code=97) [production]
13:11 <gehel@cumin1001> START - Cookbook sre.wdqs.data-reload [production]
13:10 <gehel@cumin1001> START - Cookbook sre.wdqs.data-reload [production]