2019-11-18
09:03 <marostegui> Restart MySQL on db1124 and db1125 to apply new replication filters T238370 [production]
07:17 <marostegui> Upgrade and restart mysql on sanitarium hosts on codfw to pick up new replication filters: db2094 and db2095 - T238370 [production]
07:09 <marostegui> Stop MySQL on db2070 to clone db2135 - T238183 [production]
06:52 <vgutierrez> Move cp1083 from nginx to ats-tls - T231627 [production]
06:32 <vgutierrez> Move cp1081 from nginx to ats-tls - T231627 [production]
06:30 <marostegui> Restart tendril mysql - T231769 [production]
06:12 <vgutierrez> Move cp2012 from nginx to ats-tls - T231627 [production]
06:05 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1096:3316 for compression', diff saved to https://phabricator.wikimedia.org/P9652 and previous config saved to /var/cache/conftool/dbconfig/20191118-060508-marostegui.json [production]
06:02 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1105:3312 for compression', diff saved to https://phabricator.wikimedia.org/P9651 and previous config saved to /var/cache/conftool/dbconfig/20191118-060207-marostegui.json [production]
06:01 <marostegui@cumin1001> dbctl commit (dc=all): 'Repool db2072, db2088:3311, db2087:3316, db2086:3317 after maintenances and schema changes', diff saved to https://phabricator.wikimedia.org/P9650 and previous config saved to /var/cache/conftool/dbconfig/20191118-060114-marostegui.json [production]
05:53 <marostegui> Deploy schema change on s5 primary master db1100 - T233135 T234066 [production]
03:40 <vgutierrez> Move cp2007 from nginx to ats-tls - T231627 [production]
00:44 <tstarling@deploy1001> Synchronized php-1.35.0-wmf.5/includes/Rest/Handler/PageHistoryCountHandler.php: fix extremely slow query T238378 (duration: 00m 59s) [production]
2019-11-15
22:14 <jeh@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
22:12 <jeh@cumin1001> START - Cookbook sre.hosts.downtime [production]
21:54 <jeh@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
21:52 <jeh@cumin1001> START - Cookbook sre.hosts.downtime [production]
21:31 <jeh@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
21:29 <jeh@cumin1001> START - Cookbook sre.hosts.downtime [production]
21:21 <_joe_> disabling proxying to ws on phabricator1003 [production]
20:04 <XioNoX> push pfw policies to pfw3-eqiad - T238368 [production]
20:02 <XioNoX> push pfw policies to pfw3-codfw - T238368 [production]
19:07 <XioNoX> remove vlan 1 trunking between msw1-codfw and mr1-codfw, will cause a brief connectivity interruption - T228112 [production]
18:07 <XioNoX> homer push on management switches [production]
17:30 <mutante> phabricator - started phd service [production]
17:11 <XioNoX> homer push to management routers (https://gerrit.wikimedia.org/r/550576) [production]
16:43 <hashar> Restored zuul-merger / CI for operations/puppet.git [production]
16:29 <hashar> CI slowed down due to a huge spike of internal jobs; the backlog is being flushed now # T140297 [production]
16:25 <bblack> repool cp2001 [production]
16:08 <bblack> depool cp2001 for experiments [production]
16:02 <moritzm> rebooting rpki1001 to rectify microcode loading [production]
16:00 <jmm@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
16:00 <jmm@cumin2001> START - Cookbook sre.hosts.downtime [production]
15:51 <ejegg> updated Fundraising CiviCRM from ae9b3819cd to c05c302e54 [production]
15:36 <ejegg> reduced batch size of CiviCRM contact deduplication jobs [production]
15:11 <ema> pool cp3064 with ATS backend T227432 [production]
15:07 <ema> reboot cp3064 after reimage [production]
14:51 <ema@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
14:49 <ema@cumin1001> START - Cookbook sre.hosts.downtime [production]
14:25 <ema> depool cp3064 and reimage as text_ats T227432 [production]
14:17 <godog> SIGHUP prometheus@ops on prometheus1004 [production]
14:13 <bblack> lvs1013 - pybal restart for new config [production]
14:13 <bblack> lvs2001 - pybal restart for new config [production]
14:13 <bblack> lvs5001 - pybal restart for new config [production]
14:13 <bblack> lvs4005 - pybal restart for new config [production]
14:12 <bblack> lvs3005 - pybal restart for new config [production]