2019-02-20
20:14 <thcipriani@deploy1001> Synchronized php-1.33.0-wmf.18/extensions/EventBus/includes/EventBusRCFeedEngine.php: [[gerrit:491845|Check for eventServiceName in config before accessing]] T216561 (duration: 00m 55s) [production]
18:30 <fdans@deploy1001> Finished deploy [analytics/refinery@ccf837e]: deploying refinery for new wikis and changes in scripts (duration: 11m 13s) [production]
18:24 <mobrovac@deploy1001> Finished deploy [restbase/deploy@80f518c]: Remove VE request logging - T215956 (duration: 20m 19s) [production]
18:19 <fdans@deploy1001> Started deploy [analytics/refinery@ccf837e]: deploying refinery for new wikis and changes in scripts [production]
18:04 <mobrovac@deploy1001> Started deploy [restbase/deploy@80f518c]: Remove VE request logging - T215956 [production]
17:22 <sbisson@deploy1001> Synchronized php-1.33.0-wmf.18/extensions/Flow/modules/mw.flow.Initializer.js: SWAT: [[gerrit:491744|Unbreak reply clicks with existing widget]] (duration: 00m 58s) [production]
17:08 <hashar> contint1001: fix broken root ownership on zuul git deploy repo: sudo find /etc/zuul/wikimedia/.git -not -user zuul -exec chown zuul:zuul {} + [production]
16:49 <herron> migrating es shards away from logstash100[56] with "cluster.routing.allocation.exclude._name": "logstash1005-production-logstash-eqiad,logstash1006-production-logstash-eqiad" T214608 [production]
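An exclusion like this is normally set through the standard Elasticsearch cluster settings API; a minimal sketch, with the node names taken from the entry above and the endpoint host assumed:

    # Sketch: exclude the two logstash nodes from shard allocation so
    # their shards migrate to the rest of the cluster (endpoint assumed).
    curl -X PUT 'http://localhost:9200/_cluster/settings' \
      -H 'Content-Type: application/json' \
      -d '{
        "transient": {
          "cluster.routing.allocation.exclude._name":
            "logstash1005-production-logstash-eqiad,logstash1006-production-logstash-eqiad"
        }
      }'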
16:40 <twentyafterfour> started phd again, seems to be working now without killing the db [production]
16:38 <bblack> multatuli: upgrade gdnsd to 3.0.0-1~wmf1 [production]
16:36 <godog> depool and reimage logstash1008 with stretch - T213898 [production]
16:26 <twentyafterfour> stopped phd on phab1001 and scheduled downtime in icinga [production]
16:24 <bblack> authdns1001: upgrade gdnsd to 3.0.0-1~wmf1 [production]
16:19 <twentyafterfour> stopped phd on phab1002 [production]
16:03 <ottomata> removing spark 1 from Analytics cluster - T212134 [production]
15:55 <bblack> authdns2001: upgrade gdnsd to 3.0.0-1~wmf1 [production]
15:37 <fsero> restarting docker-registry service on systemd [production]
15:35 <moritzm> temporarily stop prometheus instances on prometheus1004 for systemd upgrade/journald restart [production]
14:43 <gehel@cumin2001> END (FAIL) - Cookbook sre.elasticsearch.rolling-upgrade (exit_code=99) [production]
14:35 <gehel@cumin2001> START - Cookbook sre.elasticsearch.rolling-upgrade [production]
14:35 <volans> upgraded spicerack to 0.0.18 on cumin[12]001 [production]
14:34 <volans> uploaded spicerack_0.0.18-1_amd64.deb to apt.wikimedia.org stretch-wikimedia [production]
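Uploads like this one typically land in an apt repository managed with reprepro; a minimal sketch, assuming a plain reprepro setup (the actual import tooling on apt.wikimedia.org may differ), with the package file and distribution taken from the entry above:

    # Sketch: import the built .deb into the stretch-wikimedia
    # distribution (repository layout and invocation assumed).
    sudo reprepro includedeb stretch-wikimedia spicerack_0.0.18-1_amd64.deb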
14:00 <gehel@cumin2001> END (ERROR) - Cookbook sre.elasticsearch.rolling-upgrade (exit_code=97) [production]
14:00 <gehel@cumin2001> START - Cookbook sre.elasticsearch.rolling-upgrade [production]
13:59 <gehel> rolling upgrade of elasticsearch / cirrus / codfw to 5.6.14 - T215931 [production]
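The START/END pairs above are emitted automatically by spicerack cookbook runs on the cumin hosts; a minimal sketch of such an invocation, assuming the standard cookbook entry point (any arguments the cookbook takes are omitted here):

    # Sketch: run the rolling-upgrade cookbook from a cumin host;
    # the run logs START and END (with exit_code) to the SAL.
    sudo cookbook sre.elasticsearch.rolling-upgrade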
13:51 <godog> prometheus on prometheus2004 crashed/exited after journald upgrade -- starting up again now [production]
13:00 <jbond42> rolling restarts for hhvm in eqiad [production]
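Rolling restarts of this kind are commonly driven with cumin in small batches; a minimal sketch, where the host query, batch size, and sleep interval are illustrative assumptions rather than the command actually used:

    # Sketch: restart hhvm across eqiad app servers, 5 hosts at a
    # time, pausing 30s between batches (query alias is assumed).
    sudo cumin -b 5 -s 30 'A:mw-eqiad' 'systemctl restart hhvm.service'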
12:28 <volans> upgraded spicerack to 0.0.17 on cumin[12]001 [production]
12:25 <volans> uploaded spicerack_0.0.17-1_amd64.deb to apt.wikimedia.org stretch-wikimedia [production]
12:08 <moritzm> restarted ircecho on kraz.wikimedia.org [production]
11:46 <jbond42> rolling restarts for hhvm in codfw [production]
11:28 <akosiaris> rebuild and re-upload rsyslog_8.38.0-1~bpo9+1wmf1_amd64.changes to apt.wikimedia.org/stretch-wikimedia to have mmkubernetes package [production]
10:36 <marostegui> Deploy schema change on db1095:3313 - T210713 [production]
10:04 <marostegui> Deploy schema change on dbstore1004:3313 - T210713 [production]
09:57 <moritzm> installing systemd security updates on jessie hosts [production]
09:33 <marostegui> Deploy schema change on db2043 (s3 codfw master), lag will be generated on s3 codfw - T210713 [production]
09:06 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1109 (duration: 00m 52s) [production]
08:48 <moritzm> powercycling rdb1001 for a test [production]
07:45 <moritzm> installing gnupg2 updates on stretch [production]
07:14 <marostegui> Deploy schema change on s1 primary master (db1067) - T210713 [production]
07:13 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1080 T210713 (duration: 00m 52s) [production]
07:09 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1109 after kernel upgrade (duration: 00m 52s) [production]
06:54 <oblivian@deploy1001> Synchronized wmf-config/profiler.php: Fix the tideways setup (duration: 00m 52s) [production]
06:50 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1109 after kernel upgrade (duration: 00m 52s) [production]
06:47 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1080 T210713 (duration: 00m 51s) [production]
06:44 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1119 T210713 (duration: 00m 51s) [production]
06:38 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1109 after kernel upgrade (duration: 00m 52s) [production]
06:28 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Slowly repool db1109 after kernel upgrade (duration: 00m 52s) [production]
06:18 <marostegui> Stop MySQL on db1109 for kernel and mysql upgrade [production]
06:18 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1109 for kernel and mysql upgrade (duration: 00m 52s) [production]
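The depool/repool entries above follow the usual pattern: edit the host's load weight in wmf-config/db-eqiad.php on the deploy host, then sync the file with scap, which produces the "Synchronized wmf-config/db-eqiad.php: ..." lines in this log. A minimal sketch, where the weight edit shown in the comment is illustrative and only the sync command itself is taken from the log:

    # Sketch: after setting db1109's weight to 0 in db-eqiad.php
    # (illustrative; the real config layout may differ), e.g.
    #   'db1109' => 0,   # depooled for kernel and mysql upgrade
    # sync the file from the deploy host:
    scap sync-file wmf-config/db-eqiad.php 'Depool db1109 for kernel and mysql upgrade'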