2022-02-12
10:02 <jelto> update gitlab-runner1001 and gitlab-runner2001 to gitlab-runner 14.7.0 [production]
09:52 <jelto> update gitlab1001 to gitlab-ce 14.7.2-ce.0 [production]
09:41 <jelto> update gitlab2001 to gitlab-ce 14.7.2-ce.0 [production]
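For reference, GitLab omnibus and runner packages on Debian hosts are normally upgraded by pinning the target version in apt; a minimal sketch matching the versions above (the repository setup itself is assumed):

    # On the gitlab hosts: upgrade the omnibus package to the pinned release
    sudo apt-get update
    sudo apt-get install gitlab-ce=14.7.2-ce.0
    # On the runner hosts: same pattern for the runner package
    sudo apt-get install gitlab-runner=14.7.0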
08:49 <elukey> truncate /var/log/auth.log to 1G on krb1001 to free space on the root partition (original log saved under /srv) [production]
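A minimal sketch of that space-recovery step, assuming the copy is taken before the in-place truncation (the saved filename is illustrative):

    # Keep a copy of the full log outside the root partition, then shrink the live file
    sudo cp /var/log/auth.log /srv/auth.log.orig
    sudo truncate -s 1G /var/log/auth.log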
07:23 <dcausse> restarting blazegraph on wdqs1004 (JVM stuck for 4 hours) [production]
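On WDQS hosts Blazegraph runs under systemd; a hedged sketch of the restart (the unit name wdqs-blazegraph follows the standard WDQS setup and should be verified on the host):

    # Restart the stuck JVM and check that the service recovered
    sudo systemctl restart wdqs-blazegraph
    sudo systemctl status wdqs-blazegraph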
03:27 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1150.eqiad.wmnet with reason: Maintenance [production]
03:27 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1150.eqiad.wmnet with reason: Maintenance [production]
03:27 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3315 (T300775)', diff saved to https://phabricator.wikimedia.org/P20616 and previous config saved to /var/cache/conftool/dbconfig/20220212-032710-marostegui.json [production]
03:12 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3315', diff saved to https://phabricator.wikimedia.org/P20615 and previous config saved to /var/cache/conftool/dbconfig/20220212-031205-marostegui.json [production]
02:57 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3315', diff saved to https://phabricator.wikimedia.org/P20614 and previous config saved to /var/cache/conftool/dbconfig/20220212-025700-marostegui.json [production]
02:41 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3315 (T300775)', diff saved to https://phabricator.wikimedia.org/P20613 and previous config saved to /var/cache/conftool/dbconfig/20220212-024155-marostegui.json [production]
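The START/END downtime pairs above come from the sre.hosts.downtime cookbook run on a cumin host; a hedged sketch of the invocation that would log them (flag names assumed from the cookbook's usual interface):

    # Silence alerting for one day while the host is under maintenance
    sudo cookbook sre.hosts.downtime --days 1 -r "Maintenance" 'db1150.eqiad.wmnet'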
2022-02-11
23:23 <inflatador> puppet-merged https://gerrit.wikimedia.org/r/c/operations/puppet/+/762006 [production]
22:47 <dzahn@deploy1002> helmfile [staging] DONE helmfile.d/services/miscweb: sync on main [production]
22:36 <dzahn@deploy1002> helmfile [staging] START helmfile.d/services/miscweb: apply on main [production]
22:30 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
22:29 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
22:29 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
22:28 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
22:20 <dzahn@deploy1002> helmfile [staging] DONE helmfile.d/services/miscweb: sync on main [production]
22:09 <dzahn@deploy1002> helmfile [staging] START helmfile.d/services/miscweb: apply on main [production]
21:47 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
21:46 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
21:46 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
21:45 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
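Each helmfile START/DONE pair above is one deploy run from the deployment server; a hedged sketch of the miscweb staging deploy (the deployment-charts path is the usual layout, assumed here):

    # From the service's helmfile directory on deploy1002
    cd /srv/deployment-charts/helmfile.d/services/miscweb
    helmfile -e staging apply   # logged as "apply" at START, "sync" when it lands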
19:41 <tzatziki> removed 16 emails from accounts with deleteUserEmail.php [production]
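deleteUserEmail.php is a MediaWiki core maintenance script; a hedged sketch of a single invocation as it would run in WMF production (wiki and account name are placeholders):

    # Remove the stored email address from one account on one wiki
    mwscript deleteUserEmail.php --wiki=enwiki 'ExampleUser'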
19:13 <mutante> running puppet on all ores machines to install aspell-hi (gerrit:761974), which for some reason was installed on a random subset of ores servers (1002, 2001, 2005 but not the other 15) T300195 T252581 - after this the package is now installed on all 18 servers (1001-1009, 2001-2009) [production]
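Fleet-wide puppet runs like this are typically driven with Cumin from a cumin host; a minimal sketch (the host glob is illustrative):

    # Trigger an immediate puppet run on every ores host
    sudo cumin 'ores*' 'run-puppet-agent'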
16:54 <hnowlan@deploy1002> helmfile [eqiad] DONE helmfile.d/services/changeprop-jobqueue: sync on production [production]
16:54 <hnowlan@deploy1002> helmfile [eqiad] DONE helmfile.d/services/changeprop-jobqueue: sync on staging [production]
16:54 <hnowlan@deploy1002> helmfile [eqiad] START helmfile.d/services/changeprop-jobqueue: sync on production [production]
16:53 <hnowlan@deploy1002> helmfile [codfw] DONE helmfile.d/services/changeprop-jobqueue: sync on production [production]
16:53 <hnowlan@deploy1002> helmfile [codfw] DONE helmfile.d/services/changeprop-jobqueue: sync on staging [production]
16:53 <hnowlan@deploy1002> helmfile [codfw] START helmfile.d/services/changeprop-jobqueue: sync on production [production]
16:32 <btullis@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) for new host datahubsearch1001.eqiad.wmnet [production]
16:13 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1113:3315 (T300775)', diff saved to https://phabricator.wikimedia.org/P20611 and previous config saved to /var/cache/conftool/dbconfig/20220211-161324-marostegui.json [production]
16:13 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1113.eqiad.wmnet with reason: Maintenance [production]
16:13 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1113.eqiad.wmnet with reason: Maintenance [production]
16:03 <btullis@cumin1001> START - Cookbook sre.ganeti.makevm for new host datahubsearch1001.eqiad.wmnet [production]
14:23 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts auth2001.codfw.wmnet [production]
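A hedged sketch of the decommission cookbook invocation behind the entry above (its START pair appears further down, the log being newest-first; the Phabricator task id is a placeholder, the cookbook normally requires one):

    # Downtime, wipe, and remove the host from production state
    sudo cookbook sre.hosts.decommission -t T123456 'auth2001.codfw.wmnet'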
14:20 <marostegui@cumin1001> dbctl commit (dc=all): 'db1098:3316 (re)pooling @ 100%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P20610 and previous config saved to /var/cache/conftool/dbconfig/20220211-142045-root.json [production]
14:07 <jmm@cumin2002> START - Cookbook sre.hosts.decommission for hosts auth2001.codfw.wmnet [production]
14:05 <marostegui@cumin1001> dbctl commit (dc=all): 'db1098:3316 (re)pooling @ 75%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P20609 and previous config saved to /var/cache/conftool/dbconfig/20220211-140540-root.json [production]
13:50 <marostegui@cumin1001> dbctl commit (dc=all): 'db1098:3316 (re)pooling @ 50%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P20608 and previous config saved to /var/cache/conftool/dbconfig/20220211-135037-root.json [production]
13:35 <marostegui@cumin1001> dbctl commit (dc=all): 'db1098:3316 (re)pooling @ 25%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P20607 and previous config saved to /var/cache/conftool/dbconfig/20220211-133533-root.json [production]
13:20 <marostegui@cumin1001> dbctl commit (dc=all): 'db1098:3316 (re)pooling @ 10%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P20606 and previous config saved to /var/cache/conftool/dbconfig/20220211-132028-root.json [production]
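The 10 → 25 → 50 → 75 → 100% series above is the standard gradual repool of a replica; a hedged sketch of one step with dbctl (flag spelling per dbctl's documented interface, the commit message mirrors the log):

    # Raise the replica's pooled percentage, then commit the config change
    dbctl instance db1098:3316 pool -p 25
    dbctl config commit -m 'db1098:3316 (re)pooling @ 25%: repooling after schema change'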
13:19 <jmm@cumin2002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host ganeti1011.eqiad.wmnet with OS buster [production]
13:18 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1098.eqiad.wmnet with reason: Maintenance [production]
13:18 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1098.eqiad.wmnet with reason: Maintenance [production]
13:17 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1098.eqiad.wmnet with reason: Maintenance [production]
13:17 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1098.eqiad.wmnet with reason: Maintenance [production]
13:15 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1098:3316 (T300662)', diff saved to https://phabricator.wikimedia.org/P20605 and previous config saved to /var/cache/conftool/dbconfig/20220211-131507-marostegui.json [production]