2019-03-07
11:28 <gtirloni> updated seaborgium to stretch (T217280) [production]
11:21 <mutante> doc.wikimedia.org - back up, manually fixed path to php-fpm.sock to 7.0 - puppet disabled, fix coming [production]
11:18 <mutante> doc.wikimedia.org down and being worked on - package downgrade exposed an issue [production]
11:15 <marostegui> Stop MySQL on db1075 for upgrade [production]
11:15 <mutante> doc1001 - apt-get remove --purge php7.2* (the same packages with 7.0 were previously installed in parallel) [production]
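A minimal shell sketch of the cleanup and verification behind the doc1001 entries above; only the purge command is quoted from the log, the remaining checks, package patterns and service name are assumptions:

  # purge the parallel PHP 7.2 packages, keeping the 7.0 stack (command from the log)
  apt-get remove --purge 'php7.2*'
  # confirm what is left installed and that the 7.0 FPM service backing the socket is up
  dpkg -l 'php7*' | grep '^ii'
  systemctl status php7.0-fpm
  # quick end-to-end check that the site answers again
  curl -I https://doc.wikimedia.org/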
10:58 <gtirloni> upgrading seaborgium to Stretch (so it's running the same distro as serpens/codfw) [production]
10:34 <moritzm> restarting HHVM/Apache on mediawiki canaries to pick up OpenSSL security update [production]
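A hedged sketch of how such a rolling restart could be issued from a cumin host; the 'A:mw-canary' alias name and the exact unit names are assumptions, not taken from the log:

  # restart HHVM and Apache on the canary appservers, one host at a time with a pause in between
  sudo cumin -b 1 -s 30 'A:mw-canary' 'systemctl restart hhvm apache2'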
10:14 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1075 for schema change and mysql upgrade (duration: 00m 56s) [production]
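The "Synchronized wmf-config/db-eqiad.php" entries in this log come from scap on the deployment host; a sketch of the depool step, assuming db1075's weight has already been commented out or set to 0 in the config file:

  # on deploy1001, after editing wmf-config/db-eqiad.php
  scap sync-file wmf-config/db-eqiad.php 'Depool db1075 for schema change and mysql upgrade'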
10:13 <moritzm> upgrading mediawiki canaries to component/php72 (T216712) [production]
09:47 <moritzm> upgrading mwdebug servers in eqiad to component/php72 (T216712) [production]
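A sketch of what pulling packages from an apt component looks like on a stretch host; the sources entry follows Wikimedia's apt repository layout, but the file name and the package selection are assumptions:

  # enable the component (file name is hypothetical)
  echo 'deb http://apt.wikimedia.org/wikimedia stretch-wikimedia component/php72' \
    > /etc/apt/sources.list.d/component-php72.list
  apt-get update
  # install/upgrade the PHP 7.2 packages now available from the component
  apt-get install php7.2-fpm php7.2-cli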
09:37 <akosiaris@cumin1001> conftool action : set/pooled=no; selector: dc=codfw,service=citoid,cluster=scb,name=scb.* [production]
09:37 <akosiaris> ramp up traffic to citoid kubernetes to 100% [production]
09:37 <akosiaris@cumin1001> conftool action : set/pooled=no; selector: dc=eqiad,service=citoid,cluster=scb,name=scb.* [production]
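The conftool entries above correspond to confctl invocations; a sketch of depooling the scb-hosted citoid backends so the kubernetes deployment takes 100% of the traffic (selectors quoted from the log, the command form is standard confctl syntax):

  # depool citoid on the scb cluster in both datacenters
  sudo confctl select 'dc=codfw,service=citoid,cluster=scb,name=scb.*' set/pooled=no
  sudo confctl select 'dc=eqiad,service=citoid,cluster=scb,name=scb.*' set/pooled=no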
09:21 <moritzm> upgrading mwdebug servers in codfw to component/php72 (T216712) [production]
09:15 <elukey> fixed vlan-analytics1-d-eqiad members on asw2-d-eqiad - T205507 [production]
09:03 <mutante> mw2151 - mkdir /var/run/nutcracker ; chown nutcracker:nutcracker /var/run/nutcracker ; systemctl start nutcracker - runs again - pooling server [production]
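The mw2151 fix above recreates nutcracker's runtime directory by hand; a sketch of the same steps, plus a tmpfiles.d entry that would let systemd recreate the directory on boot (the tmpfiles part is an assumption, the log does not say it was done):

  # recreate the runtime directory and restart the service (commands from the log)
  mkdir /var/run/nutcracker
  chown nutcracker:nutcracker /var/run/nutcracker
  systemctl start nutcracker
  # optional hardening: declare the directory so it survives reboots
  echo 'd /run/nutcracker 0755 nutcracker nutcracker -' > /etc/tmpfiles.d/nutcracker.conf
  systemd-tmpfiles --create /etc/tmpfiles.d/nutcracker.conf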
08:57 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1122 (duration: 00m 55s) [production]
08:54 <mutante> depooled mw2151 - nutcracker failing [production]
08:19 <mutante> reloading icinga service [production]
08:10 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: More traffic to db1122 (duration: 00m 55s) [production]
07:47 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: repool db1122 into API (duration: 00m 55s) [production]
07:30 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1122 (duration: 00m 55s) [production]
07:28 <marostegui@deploy1001> sync-file aborted: Repool db1121 (duration: 00m 01s) [production]
07:21 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1121 (duration: 00m 56s) [production]
07:12 <marostegui> Stop MySQL on db1122 to upgrade [production]
07:12 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1122 for MySQL upgrade (duration: 00m 57s) [production]
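A hedged sketch of the host-side part of the db1122 maintenance once it was depooled; the unit and package handling are assumptions about the host's MariaDB setup, not taken from the log:

  # on db1122, after the depool has been synced
  systemctl stop mariadb              # the log says "Stop MySQL"; the unit name is an assumption
  apt-get update && apt-get upgrade   # pull in the newer database packages (exact packages unknown)
  systemctl start mariadb
  mysql_upgrade                       # refresh system tables after the version bump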
06:40 <kart_> Finished manual run of unpublished ContentTranslation draft purge script (T217310) [production]
06:03 <marostegui> Deploy schema change on db1121, this will generate lag on labsdb:s4 - T86342 [production]
06:03 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1121 (duration: 00m 57s) [production]
04:03 <kart_> Started manual run of unpublished ContentTranslation draft purge script (T217310) [production]
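A sketch of how a MediaWiki maintenance script such as the ContentTranslation draft purge is typically run from the maintenance host; the script path, wiki and flags here are hypothetical, only the mwscript wrapper itself is standard:

  # hypothetical invocation; the real script name and arguments may differ
  mwscript extensions/ContentTranslation/scripts/purgeUnpublishedDrafts.php --wiki=enwiki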
01:19 <twentyafterfour> phabricator update complete [production]
01:17 <twentyafterfour> starting phabricator update to tag release/2019-03-07/1 - expect momentary downtime [production]
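A hedged sketch of a tag-based Phabricator update of the kind logged above; the install path and service handling are assumptions, while bin/phd and bin/storage upgrade are upstream Phabricator's own tooling:

  # stop the daemons, move the checkout to the release tag, migrate the schema, restart
  cd /srv/phab/phabricator            # install path is an assumption
  ./bin/phd stop
  git fetch --tags && git checkout release/2019-03-07/1
  ./bin/storage upgrade --force
  ./bin/phd start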
01:10 <twentyafterfour> preparing phabricator upgrade [production]
00:47 <aaron@deploy1001> Synchronized php-1.33.0-wmf.20/includes/specials/pagers/ActiveUsersPager.php: f929e2a5069 (duration: 00m 56s) [production]
00:43 <aaron@deploy1001> Synchronized php-1.33.0-wmf.20/includes/specials/SpecialActiveusers.php: f929e2a5069 (duration: 00m 56s) [production]
00:28 <aaron@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Enable loading WikibaseCirrusSearch (disabled) on production wikis (duration: 00m 55s) [production]
00:23 <aaron@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Run WikibaseCirrusSearch code for search on testwikidatawiki (duration: 00m 56s) [production]
2019-03-06
21:23 <XioNoX> test ping-offload with unused IP 208.80.153.225 - T190090 [production]
20:30 <hashar> 1.33.0-wmf.20 looks fine with group0 and group1 [production]
20:14 <hashar@deploy1001> Synchronized php: group1 wikis to 1.33.0-wmf.20 (duration: 01m 43s) [production]
20:12 <hashar@deploy1001> rebuilt and synchronized wikiversions files: group1 wikis to 1.33.0-wmf.20 [production]
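The two hashar@deploy1001 entries above are the usual train promotion step; a sketch, assuming the wikiversions files have already been updated to point group1 at 1.33.0-wmf.20:

  # rebuild and push the wikiversions files so group1 wikis serve 1.33.0-wmf.20
  scap sync-wikiversions 'group1 wikis to 1.33.0-wmf.20'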
19:51 <hashar@deploy1001> Synchronized php-1.33.0-wmf.20/extensions/LdapAuthentication/LdapPrimaryAuthenticationProvider.php: Remove calls to no-longer-implemented methods after I2eeaeed1 - T217692 (duration: 00m 58s) [production]
19:14 <XioNoX> apply ping-offload redirect to private1-a-codfw - T190090 [production]
19:03 <gtirloni> increased serpens vCPUs from 4 to 8 (T217280) [production]
18:55 <gtirloni> increased seaborgium vCPUs from 4 to 8 (T217280) [production]
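A sketch of the vCPU bump for seaborgium and serpens, assuming both are Ganeti virtual machines (the log does not say which virtualization layer is involved, and the FQDN used here is an assumption):

  # raise the vCPU count in the instance's backend parameters, then reboot to apply it
  sudo gnt-instance modify -B vcpus=8 seaborgium.wikimedia.org
  sudo gnt-instance reboot seaborgium.wikimedia.org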
18:08 <bstorm_> re-enabled puppet after observing the change works well on the partner of labstore2004 (T210818) [production]
18:07 <joal@deploy1001> Finished deploy [analytics/refinery@fef9181]: Regular analytics weekly deploy train (duration: 31m 02s) [production]
18:04 <bstorm_> disabled puppet and downtimed labstore2004 while deploying a change for T210818 [production]
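A sketch of the disable/enable bracket around the labstore2004 change; the puppet agent flags are standard, the downtime step is left as a comment because the exact tooling used is not recorded in the log:

  # before deploying the change
  sudo puppet agent --disable 'deploying change for T210818'
  # (Icinga downtime for labstore2004 set separately via the usual downtime tooling)
  # after verifying the change behaves well on the partner host
  sudo puppet agent --enable
  sudo puppet agent --test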
17:36 <joal@deploy1001> Started deploy [analytics/refinery@fef9181]: Regular analytics weekly deploy train [production]
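The refinery train entries are scap3 deployments driven from the deployment host; a sketch, assuming the usual checkout location for the repository:

  # from the repository's deploy directory on deploy1001 (path is an assumption)
  cd /srv/deployment/analytics/refinery
  scap deploy 'Regular analytics weekly deploy train'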
17:34 <sbisson@deploy1001> Synchronized wmf-config/throttle.php: SWAT: [[gerrit:494782|Added new throttle rules, removed expired]] (duration: 00m 55s) [production]