2019-04-15
11:35 <Amir1> EU swat is done [production]
11:26 <moritzm> rolling restart of HHVM/Apache on labweb* to pick up OpenSSL update [production]
10:16 <Amir1> ores:8f01d40 going beta [releng]
09:58 <moritzm> installing openssl1.0 security updates [production]
09:18 <gehel> unbanning elastic1029 from cluster [production]
08:58 <moritzm> updating mediawiki servers in eqiad to version 1.8.1 of the PHP extension for wikidiff [production]
08:51 <hashar> castor: nuked /srv/jenkins-workspace/caches/castor-mw-ext-and-skins/master/mwselenium-quibble-docker # T220948 [releng]
08:29 <onimisionipe> increase wal_keep_segments on codfw maps master [production]
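For context, wal_keep_segments controls how many WAL segments a PostgreSQL primary retains for replicas that fall behind; a change like the one logged above is typically applied along these lines (value and method are assumptions, not taken from this log):

    # retain more WAL so lagging replicas can catch up (value illustrative)
    sudo -u postgres psql -c "ALTER SYSTEM SET wal_keep_segments = 512;"
    # the setting is reload-safe, so no restart is needed
    sudo -u postgres psql -c "SELECT pg_reload_conf();"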
08:19 <moritzm> updating mediawiki servers in codfw to version 1.8.1 of the PHP extension for wikidiff [production]
08:12 <Amir1> f63fddf going staging [ores]
07:50 <Amir1> ladsgroup@mwmaint1002:~$ mwscript maintenance/initSiteStats.php --wiki=hywwiki --active (T220936) [production]
05:31 <marostegui> Upgrade db1100 [production]
05:07 <marostegui> powercycle mw1280 (crashed) [production]
2019-04-14
16:23 <andrewbogott> moved all tools-worker nodes off of cloudvirt1015 and uncordoned them [tools]
12:23 <lucaswerkmeister> kubectl delete pod quickcategories-654583560-xqip5 [tools.quickcategories]
08:07 <Cam11598> 1:06:50 AM <ChanServ> Flags +AV were set on rhinosf1 in #cvn-wp-en. [cvn]
07:52 <Cam11598> 12:51:25 AM <ChanServ> Flags +AV were set on BRPever in #cvn-wikidata. [cvn]
07:51 <Cam11598> 12:50:47 AM <ChanServ> Flags +AV were set on BRPever in #cvn-meta. [cvn]
07:49 <Cam11598> 12:49:04 AM <ChanServ> Flags +AV were set on Tulsi in #cvn-wikidata. [cvn]
07:40 <Cam11598> 12:40:10 AM <ChanServ> Flags +AV were set on Tulsi in #cvn-meta. [cvn]
07:40 <Cam11598> 12:38:56 AM <ChanServ> Flags +AV were set on Tulsi in #cvn-wp-en. [cvn]
06:10 <ebernhardson> unban elastic1027 from eqiad-psi [production]
05:36 <ebernhardson> unbanning elastic1027 after about half the shards left and load dropped [production]
05:31 <ebernhardson> ban elastic1027 from elasticsearch-psi in eqiad [production]
04:59 <ebernhardson> restart elasticsearch_6@production-search-psi-eqiad on elastic1027 due to 100% cpu for the last 30+ minutes [production]
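The ban/unban entries above exclude a node from shard allocation so its shards drain off (ban) or move back (unban); the underlying Elasticsearch cluster-settings call looks roughly like the following (a sketch of the stock API; the tooling actually used may wrap it differently):

    # ban: drain shards away from elastic1027
    curl -s -XPUT http://localhost:9200/_cluster/settings \
      -H 'Content-Type: application/json' \
      -d '{"transient": {"cluster.routing.allocation.exclude._name": "elastic1027*"}}'
    # unban: clear the exclusion so shards can return
    curl -s -XPUT http://localhost:9200/_cluster/settings \
      -H 'Content-Type: application/json' \
      -d '{"transient": {"cluster.routing.allocation.exclude._name": null}}'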
2019-04-13
21:08 <bstorm_> Moving tools-prometheus-01 to cloudvirt1009 and tools-clushmaster-02 to cloudvirt1008 for T220853 [tools]
21:05 <Krinkle> Deleting a bunch of job config+history from Jenkins for jobs that no longer exist in JJB/Zuul. T91410 [releng]
21:00 <Krinkle> Deleting a bunch of job config+history from Jenkins for jobs that no longer exist in JJB/Zuul. [releng]
21:00 <Krinkle> Updating docker-pkg files on contint1001 for https://gerrit.wikimedia.org/r/503669 [releng]
20:43 <bstorm_> Moving puppet-lta to a new server because of hardware problems T220853 [lta-tracker]
20:36 <bstorm_> moving tools-elastic-02 to cloudvirt1009 for T220853 [tools]
20:28 <bstorm_> migrated product-analytics-test to cloudvirt1009 for T220853 [discovery-stats]
20:17 <bstorm_> migrated product-analytics-bayes to cloudvirt1009 for T220853 [discovery-stats]
19:58 <bstorm_> started migrating tools-k8s-etcd-03 to cloudvirt1012 T220853 [tools]
19:51 <bstorm_> started migrating tools-flannel-etcd-02 to cloudvirt1013 T220853 [tools]
18:58 <Joan> Restarted CVNBot18 (Last message was received on RCReader 38511.495035 seconds ago). [cvn]
18:52 <Krinkle> "Your JENKINS_HOME (/var/lib/jenkins) is almost full. " [releng]
18:46 <godog> 3h downtime for cloudvirt1015 [production]
18:25 <bd808> Restarted nova-compute service on cloudvirt1015 (T220853) [admin]
18:23 <bd808> Attempting to reboot puppet-lta via OpenStack cli (take 2) [lta-tracker]
18:23 <bd808> nova reset-state --active 8b764eb9-2dca-4902-a9c5-ed54fa3fc57d [lta-tracker]
18:21 <bd808> Attempting to reboot puppet-lta via OpenStack cli [lta-tracker]
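The two reboot attempts above pair the reset-state call logged at 18:23 with a forced reboot; the usual novaclient sequence is roughly as follows (instance UUID copied from the 18:23 entry, the reboot step assumed):

    # clear the stuck ERROR state so the instance accepts actions again
    nova reset-state --active 8b764eb9-2dca-4902-a9c5-ed54fa3fc57d
    # then force a hard reboot of the instance
    nova reboot --hard 8b764eb9-2dca-4902-a9c5-ed54fa3fc57d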
18:16 <Krinkle> Updating docker-pkg files on contint1001 for https://gerrit.wikimedia.org/r/503664 [releng]
17:44 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/502807 (postgres php72) [releng]
15:58 <ebernhardson> restart elasticsearch on elastic1027 [production]
15:34 <shdubsh> restart recommendation_api on scb1001 [production]
15:33 <shdubsh> restart recommendation_api on scb2001 [production]
10:46 <onimisionipe> depooling maps2001 for postgres init [production]
09:21 <elukey> re-run failed webrequest-text 2019-04-13-07 job - temporary failure between Hive and HDFS [analytics]
08:05 <gehel> repooling wdqs1008 - data transfer completed - T220830 [production]