2019-02-21
04:16 <XioNoX> Unplug Tata/NTT/PCCW from cr1-eqsin - T213121 [production]
03:21 <XioNoX> replace cp5010 disk 1 - T214274 [production]
03:15 <kart_> Fifth manual run of unpublished draft purge script for ContentTranslation (T216470) [production]
02:44 <XioNoX> depool eqsin - T213121 [production]
02:31 <twentyafterfour> phabricator upgrade finished, service appears to have returned to normal [production]
01:43 <twentyafterfour> running phabricator database schema changes [production]
01:38 <twentyafterfour> now taking phabricator offline for upgrade [production]
01:15 <twentyafterfour> Taking phabricator offline momentarily for upgrade [production]
01:01 <twentyafterfour> set downtime in icinga for phab100* [production]
00:17 <catrope@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Enable partial blocks on metawiki and mediawikiwiki (T216065) (duration: 00m 54s) [production]
2019-02-20
23:59 <ppchelko@deploy1001> Finished deploy [changeprop/deploy@5e4486a]: Purge varnish on revision restrictions (duration: 01m 23s) [production]
23:57 <ppchelko@deploy1001> Started deploy [changeprop/deploy@5e4486a]: Purge varnish on revision restrictions [production]
21:48 <eileen> civicrm revision changed from 165fbf5894 to 1b5d974569, config revision is ccefa3716b [production]
21:46 <arlolra> Updated Parsoid to 9b204a0 (T153080, T169975, T215824) [production]
21:28 <arlolra@deploy1001> Finished deploy [parsoid/deploy@c4574d1]: Updating Parsoid to 9b204a0 (duration: 09m 33s) [production]
21:19 <arlolra@deploy1001> Started deploy [parsoid/deploy@c4574d1]: Updating Parsoid to 9b204a0 [production]
21:08 <_joe_> rolling restart of php-fpm to catch up with the tideways change [production]
20:35 <thcipriani@deploy1001> Synchronized php: group1 wikis to 1.33.0-wmf.18 (duration: 00m 53s) [production]
20:33 <thcipriani@deploy1001> rebuilt and synchronized wikiversions files: group1 wikis to 1.33.0-wmf.18 [production]
20:14 <thcipriani@deploy1001> Synchronized php-1.33.0-wmf.18/extensions/EventBus/includes/EventBusRCFeedEngine.php: [[gerrit:491845|Check for eventServiceName in config before accessing]] T216561 (duration: 00m 55s) [production]
18:30 <fdans@deploy1001> Finished deploy [analytics/refinery@ccf837e]: deploying refinery for new wikis and changes in scripts (duration: 11m 13s) [production]
18:24 <mobrovac@deploy1001> Finished deploy [restbase/deploy@80f518c]: Remove VE request logging - T215956 (duration: 20m 19s) [production]
18:19 <fdans@deploy1001> Started deploy [analytics/refinery@ccf837e]: deploying refinery for new wikis and changes in scripts [production]
18:04 <mobrovac@deploy1001> Started deploy [restbase/deploy@80f518c]: Remove VE request logging - T215956 [production]
17:22 <sbisson@deploy1001> Synchronized php-1.33.0-wmf.18/extensions/Flow/modules/mw.flow.Initializer.js: SWAT: [[gerrit:491744|Unbreak reply clicks with existing widget]] (duration: 00m 58s) [production]
17:08 <hashar> contint1001: fix broken root ownership on zuul git deploy repo: sudo find /etc/zuul/wikimedia/.git -not -user zuul -exec chown zuul:zuul {} + [production]
16:49 <herron> migrating es shards away from logstash100[56] with "cluster.routing.allocation.exclude._name" : "logstash1005-production-logstash-eqiad,logstash1006-production-logstash-eqiad" T214608 [production]
16:40 <twentyafterfour> started phd again, seems to be working now without killing the db [production]
16:38 <bblack> multatuli: upgrade gdnsd to 3.0.0-1~wmf1 [production]
16:36 <godog> depool and reimage logstash1008 with stretch - T213898 [production]
16:26 <twentyafterfour> stopped phd on phab1001 and scheduled downtime in icinga [production]
16:24 <bblack> authdns1001: upgrade gdnsd to 3.0.0-1~wmf1 [production]
16:19 <twentyafterfour> stopped phd on phab1002 [production]
16:03 <ottomata> removing spark 1 from Analytics cluster - T212134 [production]
15:55 <bblack> authdns2001: upgrade gdnsd to 3.0.0-1~wmf1 [production]
15:37 <fsero> restarting docker-registry service on systemd [production]
15:35 <moritzm> temporarily stop prometheus instances on prometheus1004 for systemd upgrade/journald restart [production]
14:43 <gehel@cumin2001> END (FAIL) - Cookbook sre.elasticsearch.rolling-upgrade (exit_code=99) [production]
14:35 <gehel@cumin2001> START - Cookbook sre.elasticsearch.rolling-upgrade [production]
14:35 <volans> upgraded spicerack to 0.0.18 on cumin[12]001 [production]
14:34 <volans> uploaded spicerack_0.0.18-1_amd64.deb to apt.wikimedia.org stretch-wikimedia [production]
14:00 <gehel@cumin2001> END (ERROR) - Cookbook sre.elasticsearch.rolling-upgrade (exit_code=97) [production]
14:00 <gehel@cumin2001> START - Cookbook sre.elasticsearch.rolling-upgrade [production]
13:59 <gehel> rolling upgrade of elasticsearch / cirrus / codfw to 5.6.14 - T215931 [production]
13:51 <godog> prometheus on prometheus2004 crashed/exited after journald upgrade -- starting up again now [production]
13:00 <jbond42> rolling restarts for hhvm in eqiad [production]
12:28 <volans> upgraded spicerack to 0.0.17 on cumin[12]001 [production]
12:25 <volans> uploaded spicerack_0.0.17-1_amd64.deb to apt.wikimedia.org stretch-wikimedia [production]
12:08 <moritzm> restarted ircecho on kraz.wikimedia.org [production]
11:46 <jbond42> rolling restarts for hhvm in codfw [production]