2019-04-13
18:25 <bd808> Restarted nova-compute service on cloudvirt1015 (T220853) [admin]
18:23 <bd808> Attempting to reboot puppet-lta via OpenStack cli (take 2) [lta-tracker]
18:23 <bd808> nova reset-state --active 8b764eb9-2dca-4902-a9c5-ed54fa3fc57d [lta-tracker]
18:21 <bd808> Attempting to reboot puppet-lta via OpenStack cli [lta-tracker]
18:16 <Krinkle> Updating docker-pkg files on contint1001 for https://gerrit.wikimedia.org/r/503664 [releng]
17:44 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/502807 (postgres php72) [releng]
15:58 <ebernhardson> restart elasticsearch on elastic1027 [production]
15:34 <shdubsh> restart recommendation_api on scb1001 [production]
15:33 <shdubsh> restart recommendation_api on scb2001 [production]
10:46 <onimisionipe> depooling maps2001 for postgres init [production]
09:21 <elukey> re-run failed webrequest-text 2019-04-13-07 job - temporary failure between Hive and HDFS [analytics]
08:05 <gehel> repooling wdqs1008 - data transfer completed - T220830 [production]
00:32 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.25/includes/: Idc19cc29764a / T220854 - hot fix (duration: 05m 37s) [production]
00:06 <Krenair> transferred /home from deployment-cache-upload04 to deployment-cache-upload05 and shut down old one [releng]
00:06 <Krenair> transferred /home from deployment-cumin to deployment-cumin02 and shut down old one [releng]
00:06 <Joan> Restarted CVNBot18 (Last message was received on RCReader 17276.533177 seconds ago). [cvn]
2019-04-12
23:34 <mutante> toolsbeta-k8s-master-01 was out of disk space on /; puppet failed to run because the disk was full. Renamed existing syslog.1.gz and gzipped syslog.1, renamed existing daemon.log.1.gz and gzipped daemon.log.1 [toolsbeta]
21:16 <Krinkle> scap was unable to sync to 1 apache (connect to host cloudweb2001-dev.wikimedia.org port 22: Connection timed out) [production]
21:10 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.25/extensions/ImageMap/includes/ImageMap.php: I0ee84f059da / T217087 (duration: 05m 12s) [production]
19:27 <dzahn@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=99) [production]
19:27 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission [production]
19:24 <dzahn@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=99) [production]
19:24 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission [production]
18:59 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) [production]
18:59 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission [production]
17:17 <onimisionipe> depooling maps2002 for postgres init [production]
17:16 <onimisionipe> repooling maps2001 - postgres init is complete [production]
17:09 <Lucas_WMDE> wikidata-shex for i in {1..36}; do printf 'Schema:O%d\n' "$i"; done | mwscript purgePage.php --skip-exists-check # purge Schema pages after config change [wikidata-dev]
17:08 <Lucas_WMDE> wikidata-shex add packages/shex-webapp/ to wgWBSchemaShExSimpleUrl [wikidata-dev]
17:06 <Lucas_WMDE> wikidata-shex for i in {1..36}; do printf 'Schema:O%d\n' "$i"; done | mwscript purgePage.php --skip-exists-check # purge Schema pages after config change (T218886) [wikidata-dev]
17:05 <Lucas_WMDE> wikidata-shex add hideData and textMapIsSparqlQuery to wgWBSchemaShExSimpleUrl (T218886) [wikidata-dev]
16:14 <elukey> install ifstat on all the mc1* hosts for network bandwidth investigation [production]
15:56 <gehel> starting data transfer from wdqs1008 to wdqs1009 - T220830 [production]
15:32 <thcipriani> gerrit back [production]
15:29 <thcipriani> gerrit restart incoming [production]
15:09 <Krenair> upload traffic now through cache-upload05 [releng]
14:29 <onimisionipe> depool maps2001 for postgres initialization [production]
13:24 <akosiaris> re-enable puppet across the fleet. Patch merged, recovery storm coming [production]
13:18 <akosiaris> disable puppet across the fleet to avoid incoming puppet alert storm [production]
12:57 <marostegui> Purge old rows and optimize tables on spare host pc1010 T210725 [production]
12:53 <urandom> decommissioning cassandra-c, restbase2008 -- T208087 [production]
12:49 <gehel> rolling restart of cassandra on maps* for jvm upgrade [production]
12:22 <arturo> T220095 disable icinga checks for labtestcontrol2003 [production]
12:16 <gilles@deploy1001> Synchronized wmf-config/InitialiseSettings.php: T220807 Reduce cawiki survey sampling rate (duration: 05m 11s) [production]
11:56 <moritzm> upgrading app server canaries to version 1.8.1 of the PHP wikidiff extension (HHVM already deployed) T203069 [production]
11:46 <moritzm> upgrading acmechief hosts to latest buster state [production]
11:44 <gilles@deploy1001> Synchronized wmf-config/InitialiseSettings.php: T220807 Oversample navtiming on cawiki and commonswiki (duration: 05m 14s) [production]
11:37 <Trey314159> reindexing Greek, Turkish, and Irish wikis on elastic@eqiad and elastic@codfw complete (T217806) [production]
11:19 <moritzm> installed Java security updates on relforge* hosts [production]
11:10 <moritzm> installing Java security updates on remaining maps hosts [production]