2019-04-14 §
07:49 <Cam11598> 12:49:04 AM <ChanServ> Flags +AV were set on Tulsi in #cvn-wikidata. [cvn]
07:40 <Cam11598> 12:40:10 AM <ChanServ> Flags +AV were set on Tulsi in #cvn-meta. [cvn]
07:40 <Cam11598> 12:38:56 AM <ChanServ> Flags +AV were set on Tulsi in #cvn-wp-en. [cvn]
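(For context, "Flags +AV" is standard Atheme ChanServ access syntax; a change like the ones relayed above would have been issued roughly as follows, with the channel and nick taken from the log.)
    /msg ChanServ FLAGS #cvn-wikidata Tulsi +AV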
06:10 <ebernhardson> unban elastic1027 from eqiad-psi [production]
05:36 <ebernhardson> unbanning elastic1027 after about half the shards left and load dropped [production]
05:31 <ebernhardson> ban elastic1027 from elasticsearch-psi in eqiad [production]
04:59 <ebernhardson> restart elasticsearch_6@production-search-psi-eqiad on elastic1027 due to 100% cpu for the last 30+ minutes [production]
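(Banning a node such as elastic1027 is conventionally done by excluding it from shard allocation; a minimal sketch using the stock Elasticsearch cluster settings API, with the endpoint and port assumed - the actual wrapper tooling used here may differ.)
    curl -XPUT 'http://localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '
    {"transient": {"cluster.routing.allocation.exclude._name": "elastic1027*"}}'
    # shards drain off the excluded node; setting the value back to "" corresponds to the later unban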
2019-04-13 §
21:08 <bstorm_> Moving tools-prometheus-01 to cloudvirt1009 and tools-clushmaster-02 to cloudvirt1008 for T220853 [tools]
21:05 <Krinkle> Deleting a bunch of job config+history from Jenkins for jobs that no longer exist in JJB/Zuul. T91410 [releng]
21:00 <Krinkle> Deleting a bunch of job config+history from Jenkins for jobs that no longer exist in JJB/Zuul. [releng]
21:00 <Krinkle> Updating docker-pkg files on contint1001 for https://gerrit.wikimedia.org/r/503669 [releng]
20:43 <bstorm_> Moving puppet-lta to a new server because of hardware problems T220853 [lta-tracker]
20:36 <bstorm_> moving tools-elastic-02 to cloudvirt1009 for T220853 [tools]
20:28 <bstorm_> migrated product-analytics-test to cloudvirt1009 for T220853 [discovery-stats]
20:17 <bstorm_> migrated product-analytics-bayes to cloudvirt1009 for T220853 [discovery-stats]
19:58 <bstorm_> started migrating tools-k8s-etcd-03 to cloudvirt1012 T220853 [tools]
19:51 <bstorm_> started migrating tools-flannel-etcd-02 to cloudvirt1013 T220853 [tools]
18:58 <Joan> Restarted CVNBot18 (Last message was received on RCReader 38511.495035 seconds ago). [cvn]
18:52 <Krinkle> "Your JENKINS_HOME (/var/lib/jenkins) is almost full. " [releng]
18:46 <godog> 3h downtime for cloudvirt1015 [production]
18:25 <bd808> Restarted nova-compute service on cloudvirt1015 (T220853) [admin]
18:23 <bd808> Attempting to reboot puppet-lta via OpenStack cli (take 2) [lta-tracker]
18:23 <bd808> nova reset-state --active 8b764eb9-2dca-4902-a9c5-ed54fa3fc57d [lta-tracker]
18:21 <bd808> Attempting to reboot puppet-lta via OpenStack cli [lta-tracker]
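(The OpenStack CLI sequence sketched below matches these entries; the UUID is the one logged above, and the hard-reboot flag is an assumption about how the reboot was attempted.)
    nova reset-state --active 8b764eb9-2dca-4902-a9c5-ed54fa3fc57d   # clear the stuck/error state first
    nova reboot --hard 8b764eb9-2dca-4902-a9c5-ed54fa3fc57d          # then request a hard reboot of the instance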
18:16 <Krinkle> Updating docker-pkg files on contint1001 for https://gerrit.wikimedia.org/r/503664 [releng]
17:44 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/502807 (postgres php72) [releng]
15:58 <ebernhardson> restart elasticsearch on elastic1027 [production]
15:34 <shdubsh> restart recommendation_api on scb1001 [production]
15:33 <shdubsh> restart recommendation_api on scb2001 [production]
10:46 <onimisionipe> depooling maps2001 for postgres init [production]
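(De/repooling production hosts is normally done through conftool; a minimal sketch, assuming the FQDN below and that only the pooled flag was toggled.)
    sudo confctl select 'name=maps2001.codfw.wmnet' set/pooled=no    # depool; FQDN assumed
    sudo confctl select 'name=maps2001.codfw.wmnet' set/pooled=yes   # repool once postgres init is complete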
09:21 <elukey> re-ran the failed webrequest-text 2019-04-13-07 job - temporary failure between Hive and HDFS [analytics]
08:05 <gehel> repooling wdqs1008 - data transfer completed - T220830 [production]
00:32 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.25/includes/: Idc19cc29764a / T220854 - hot fix (duration: 05m 37s) [production]
00:06 <Krenair> transferred /home from deployment-cache-upload04 to deployment-cache-upload05 and shut down old one [releng]
00:06 <Krenair> transferred /home from deployment-cumin to deployment-cumin02 and shut down old one [releng]
00:06 <Joan> Restarted CVNBot18 (Last message was received on RCReader 17276.533177 seconds ago). [cvn]
2019-04-12 §
23:34 <mutante> toolsbeta-k8s-master-01 was out of disk space on /; puppet failed to run because of it. Renamed the existing syslog.1.gz and gzipped syslog.1, renamed the existing daemon.log.1.gz and gzipped daemon.log.1 [toolsbeta]
21:16 <Krinkle> scap was unable to sync to 1 apache (connect to host cloudweb2001-dev.wikimedia.org port 22: Connection timed out) [production]
21:10 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.25/extensions/ImageMap/includes/ImageMap.php: I0ee84f059da / T217087 (duration: 05m 12s) [production]
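(The "Synchronized ..." lines are what scap logs when a sync completes; a minimal sketch of the corresponding command as run from the deployment host, with the log message approximated from the entry above.)
    scap sync-file php-1.33.0-wmf.25/extensions/ImageMap/includes/ImageMap.php 'I0ee84f059da / T217087'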
19:27 <dzahn@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=99) [production]
19:27 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission [production]
19:24 <dzahn@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=99) [production]
19:24 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission [production]
18:59 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) [production]
18:59 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission [production]
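(The START/END pairs above are emitted automatically by the cookbook runner on cumin1001; a run is started roughly as below, where the host argument is hypothetical since these entries do not name the host being decommissioned. exit_code=99 marks a failed run, 0 a successful one.)
    sudo cookbook sre.hosts.decommission decom-host.example.wmnet   # hypothetical host argument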
17:17 <onimisionipe> depooling maps2002 for postgres init [production]
17:16 <onimisionipe> repooling maps2001 - postgres init is complete [production]
17:09 <Lucas_WMDE> wikidata-shex for i in {1..36}; do printf 'Schema:O%d\n' "$i"; done | mwscript purgePage.php --skip-exists-check # purge Schema pages after config change [wikidata-dev]
17:08 <Lucas_WMDE> wikidata-shex add packages/shex-webapp/ in wgWBSchemaShExSimpleUrl [wikidata-dev]
17:06 <Lucas_WMDE> wikidata-shex for i in {1..36}; do printf 'Schema:O%d\n' "$i"; done | mwscript purgePage.php --skip-exists-check # purge Schema pages after config change (T218886) [wikidata-dev]