2020-08-13
ยง
|
14:45 <ema> repool mw1382 with kernel memory accounting disabled T260281 [production]
14:45 <fdans@deploy1001> Started deploy [analytics/refinery@ba1a439]: Regular analytics weekly train [production]
14:41 <oblivian@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
14:40 <oblivian@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
14:38 <ema> reboot mw1382 with kernel memory accounting disabled T260281 [production]
14:34 <oblivian@cumin1001> START - Cookbook sre.hosts.reboot-single [production]
14:34 <_joe_> rebooting mw1381 with a newer kernel, mw1383 as control with the old kernel T260329 [production]
14:33 <oblivian@cumin1001> START - Cookbook sre.hosts.reboot-single [production]
14:31 <_joe_> installing kernel 4.19.0-0.bpo.9 on mw1381 T260329 [production]
14:05 <elukey@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) [production]
14:00 <elukey> create schema[12]00[34] in ganeti - T260347 [production]
13:59 <elukey@cumin1001> START - Cookbook sre.ganeti.makevm [production]
13:58 <elukey@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) [production]
13:53 <elukey@cumin1001> START - Cookbook sre.ganeti.makevm [production]
13:51 <elukey@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) [production]
13:46 <elukey@cumin1001> START - Cookbook sre.ganeti.makevm [production]
13:45 <hnowlan> moving api-gateway service to monitoring_setup [production]
13:44 <elukey@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) [production]
13:44 <hashar> Gracefully restarting Zuul [production]
13:39 <elukey@cumin1001> START - Cookbook sre.ganeti.makevm [production]
13:10 <_joe_> forcing a puppet run on the api appservers in eqiad T260329 [production]
13:07 <oblivian@deploy1001> Synchronized wmf-config/CommonSettings.php: revert enabling of lilypond (again) T257091 T260329 (duration: 00m 59s) [production]
11:24 <jayme@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
11:20 <jayme@cumin1001> END (FAIL) - Cookbook sre.hosts.reboot-single (exit_code=1) [production]
11:09 <hnowlan> restarting pybal on lvs2010 T254908 [production]
11:09 <jayme@cumin1001> START - Cookbook sre.hosts.reboot-single [production]
11:06 <hnowlan> restarting pybal on lvs2009 T254908 [production]
11:05 <hnowlan> restarting pybal on lvs1016 T254908 [production]
11:05 <jayme> depool mw1380 for downgrade of poppler-utils,libpoppler-glib8,libpoppler64,curl,libcurl3,libcurl3-gnutls,libpython3.5,python3.5,libpython3.5-stdlib,python3.5-minimal,libpython3.5-minimal,imagemagick-6-common,libmagickcore-6.q16-3,libmagickwand-6.q16-3,imagemagick-6.q16,imagemagick,e2fslibs,e2fsprogs,libcomerr2,libss2 and reboot - T260329 [production]
11:05 <hnowlan> restarting pybal on lvs1015 T254908 [production]
11:04 <jayme@cumin1001> START - Cookbook sre.hosts.reboot-single [production]
10:42 <hnowlan> Moving api-gateway service from service_setup to lvs_setup and running puppet on LVS servers [production]
10:17 <jayme> depool mw1379 for downgrade of poppler-utils,libpoppler-glib8,libpoppler64,curl,libcurl3,libcurl3-gnutls,libpython3.5,python3.5,libpython3.5-stdlib,python3.5-minimal,libpython3.5-minimal,imagemagick-6-common,libmagickcore-6.q16-3,libmagickwand-6.q16-3,imagemagick-6.q16,imagemagick,e2fslibs,e2fsprogs,libcomerr2,libss2 and reboot - T260329 [production]
10:04 <XioNoX> re-order OSPF interfaces on all routers (now partially netbox driven) [production]
09:37 <ayounsi@deploy1001> Finished deploy [homer/deploy@89636df]: Homer release v0.2.5 (duration: 03m 03s) [production]
09:34 <ayounsi@deploy1001> Started deploy [homer/deploy@89636df]: Homer release v0.2.5 [production]
08:59 <ayounsi@cumin1001> END (PASS) - Cookbook sre.network.prepare-upgrade (exit_code=0) [production]
08:58 <ayounsi@cumin1001> END (PASS) - Cookbook sre.network.prepare-upgrade (exit_code=0) [production]
08:55 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool db1082', diff saved to https://phabricator.wikimedia.org/P12247 and previous config saved to /var/cache/conftool/dbconfig/20200813-085547-marostegui.json [production]
08:45 <_joe_> downgrading imagemagick on mw1378 T260329 [production]
08:43 <_joe_> downgrading imagemagick on mw1378 T260281 [production]
08:38 <ayounsi@cumin1001> START - Cookbook sre.network.prepare-upgrade [production]
08:38 <ayounsi@cumin1001> START - Cookbook sre.network.prepare-upgrade [production]
07:55 <_joe_> downgrading curl/libcurl3/libcurl3-gnutls on mw1377 T260329 [production]
07:45 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1082', diff saved to https://phabricator.wikimedia.org/P12246 and previous config saved to /var/cache/conftool/dbconfig/20200813-074528-marostegui.json [production]
07:19 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1082', diff saved to https://phabricator.wikimedia.org/P12244 and previous config saved to /var/cache/conftool/dbconfig/20200813-071943-marostegui.json [production]
07:16 <marostegui> Stop replication on db1082 to remove triggers on sanitarium for MCR changes [production]
07:15 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1082', diff saved to https://phabricator.wikimedia.org/P12243 and previous config saved to /var/cache/conftool/dbconfig/20200813-071545-marostegui.json [production]
06:48 <marostegui> Deploy MCR change on dbstore1003:3311 [production]
06:02 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]