2015-08-19
16:32 <marxarelli> Running apt-get install --reinstall elasticsearch to re-create the missing /var/run/elasticsearch directory (and possibly others) [releng]
16:29 <marxarelli> Investigating stopped mysql on integration-slave-precise-1014 [releng]
16:21 <marxarelli> Reloading Zuul to deploy I6815cd66169ee8f6fbb5ea394e3a10ce6b6e7609 [releng]
16:17 <marxarelli> Reloading Zuul to deploy I48ab39e330ebc71266b72cae8449cc2f6da495fe [releng]
14:51 <jzerebecki> reload zuul for 700f380..0384ff5 [releng]
2015-08-18
18:33 <jzerebecki> (mysql wasn't started as puppet never got to that point) [releng]
18:32 <jzerebecki> /etc/init.d/elasticsearch start was looping endlessly because /var/run/elasticsearch/ did not exist, even though it is part of the installed elasticsearch Debian package. Fixed the issue on this instance with: integration-slave-precise-1013:~# apt-get install --reinstall elasticsearch [releng]
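The failure mode above is generic: an init script spins if the run directory shipped in its package has been cleaned out (common when /var/run is tmpfs). A minimal sketch of the check, assuming nothing beyond POSIX sh; the reinstall fix from the log entry is noted in the comments, and `check_run_dir` is a hypothetical helper, not part of any package:

```shell
# check_run_dir: report whether a service's run directory exists.
# On integration-slave-precise-1013 the fix for a missing directory was
# `apt-get install --reinstall elasticsearch`, which restores the files
# and directories shipped in the package.
check_run_dir() {
  dir="$1"
  if [ -d "$dir" ]; then
    echo "ok: $dir"
    return 0
  else
    echo "missing: $dir"   # candidate for a package reinstall
    return 1
  fi
}

check_run_dir /tmp || true   # an existing directory prints "ok: /tmp"
```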
16:32 <jzerebecki> offlined integration-slave-precise-1013: fails to connect to mysql. /etc/init.d/mysql start fails. [releng]
16:00 <jzerebecki> reloading zuul for 6486889..700f380 [releng]
2015-08-17
22:18 <legoktm> running schema change for [[gerrit:202344]] on beta [releng]
19:19 <legoktm> freeing up disk space on 1012 [releng]
19:15 <legoktm> [11:45:39] <legoktm> !log freeing up disk space on 1017 [releng]
19:15 <legoktm> restarted qa-morebots [releng]
18:45 <legoktm> freeing up disk space on 1017 [releng]
02:28 <legoktm> deploying https://gerrit.wikimedia.org/r/231902 [releng]
2015-08-16
08:24 <jzerebecki> reloading zuul for 340476e..9270810 [releng]
2015-08-15
00:38 <marxarelli> Reloading Zuul to deploy I9ef82b6d3ea7d83de8e4a67c9715ccf335c00b88 [releng]
2015-08-14
18:52 <thcipriani> disconnect/reconnect for deployment-bastion jenkins slave; leftover stalled jobs went away [releng]
18:43 <greg-g> killed some of the queued jobs (beta-scap etc) via clicking on the red X [releng]
18:42 <thcipriani> disconnected and reconnected deployment-bastion jenkins slave [releng]
16:14 <ostriches> fixed deployment-cache-upload04 [releng]
05:21 <bd808> varnish-fe on deployment-cache-upload04.deployment-prep.eqiad.wmflabs not starting because nginx isn't starting because ssl cert is missing. No port 80 listener to serve images [releng]
2015-08-13
23:27 <legoktm> deploying https://gerrit.wikimedia.org/r/230256 [releng]
21:47 <bd808> triggered beta-scap-eqiad jenkins job [releng]
21:46 <bd808> Primed keyholder agent via `sudo -u keyholder env SSH_AUTH_SOCK=/run/keyholder/agent.sock ssh-add /etc/keyholder.d/mwdeploy_rsa` [releng]
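The keyholder priming command relies on a general ssh-agent mechanism: `ssh-add` talks to whatever agent socket `SSH_AUTH_SOCK` points at. A sketch of the same pattern with a throwaway agent and key; the production socket (/run/keyholder/agent.sock) and key (/etc/keyholder.d/mwdeploy_rsa) are the ones in the log entry, while every path below is an illustrative stand-in:

```shell
# Demonstrate ssh-add against an explicitly chosen agent socket -- the same
# mechanism keyholder uses, just with a scratch agent and a demo key.
eval "$(ssh-agent -s)" > /dev/null        # start a scratch agent; exports SSH_AUTH_SOCK
trap 'ssh-agent -k > /dev/null' EXIT      # clean up the agent on exit

key_file=$(mktemp -u)
ssh-keygen -q -t ed25519 -N '' -f "$key_file"     # unencrypted demo key

# Add the key to the agent selected by SSH_AUTH_SOCK (keyholder's priming
# step does this as the keyholder user against the shared socket):
SSH_AUTH_SOCK="$SSH_AUTH_SOCK" ssh-add "$key_file" 2> /dev/null

ssh-add -l    # sanity check: list the loaded key(s)
```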
20:21 <cscott> deployed I2e792ca14a35a79e7846b0ed03a36adf55fe338f to zuul (and reloaded) [releng]
19:28 <cscott> deployed 0c0f6e936bacfffde432ecf1e53f73f037ca6c42 to zuul (and jenkins) [releng]
17:43 <marxarelli> Reloading Zuul to deploy I0159f6dba5e187bfc5fe2b680408f35aca6ca2fe [releng]
2015-08-12
22:16 <bd808> Cherry-picked https://gerrit.wikimedia.org/r/#/c/231179/ (Disable authentication for Kibana) [releng]
21:01 <marxarelli> Reloading Zuul to deploy I11bcac5b35a8f36cf3eb43caf7b792de6105a501 and I4bec54d445cb41cba3d6f5d9bd74ffe823b2c7ad [releng]
20:46 <urandom> restarted restbase on deployment-restbase01 (dead) [releng]
18:58 <bd808> Applied https://gerrit.wikimedia.org/r/#/c/231049/ via cherry-pick [releng]
16:50 <bd808> Fixed puppet merge conflict [releng]
2015-08-11
21:54 <bd808> Cherry-picked https://gerrit.wikimedia.org/r/#/c/230922/ for testing [releng]
11:36 <hashar> Fixed puppet on integration-slave-trusty-1017. The puppet.conf servername had <tt>integration-puppetmaster.</tt>; appended the subdomain/domain to it. [releng]
02:02 <legoktm> deleted beta-recompile-math-texvc-eqiad from jenkins [releng]
02:01 <legoktm> deploying https://gerrit.wikimedia.org/r/229376 [releng]
01:27 <legoktm> deploying https://gerrit.wikimedia.org/r/230619 [releng]
00:45 <bd808> Logstash properly creating new indices again and logs are being collected [releng]
00:42 <bd808> Fixed default mapping for logstash indices to have 0 replicas [releng]
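The replica-count fix amounts to setting `number_of_replicas` to 0 in the default index template, so that new daily logstash-* indices stop allocating replica shards that a small cluster cannot assign (which leaves indices unhealthy, as in the entries below). A hedged sketch of such a template body against the Elasticsearch index-template API; the template name and exact settings on deployment-logstash2 are assumptions, not taken from the log:

```
PUT /_template/logstash
{
  "template": "logstash-*",
  "settings": {
    "index.number_of_replicas": 0
  }
}
```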
00:40 <bd808> Crap! logstash errors are my fault. I updated the default index mapping and neglected to correct the replica count. Missing all data from 2015-08-08 to now [releng]
00:36 <bd808> Elasticsearch index for logstash-2015.08.10 missing/corrupt [releng]
00:34 <bd808> Restarted elasticsearch on deployment-logstash2 [releng]
00:32 <bd808> Started logstash on deployment-logstash2; process had died from OOM [releng]
00:02 <marxarelli> clearing disk space on integration-slave-trusty-1011, integration-slave-trusty-1012, integration-slave-trusty-1013 [releng]
2015-08-10
23:57 <marxarelli> clearing disk space on integration-slave-trusty-1016 with `find /mnt/jenkins-workspace/workspace -mindepth 1 -maxdepth 1 -type d -mtime +15 -exec rm -rf {} \;` [releng]
23:57 <marxarelli> clearing disk space on integration-slave-trusty-1014 with `find /mnt/jenkins-workspace/workspace -mindepth 1 -maxdepth 1 -type d -mtime +15 -exec rm -rf {} \;` [releng]
23:53 <marxarelli> clearing disk space on integration-slave-trusty-1014 [releng]
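The `find` one-liner used for the cleanups above can be rehearsed safely against a scratch directory before pointing it at a real Jenkins workspace. In this sketch `$WORKSPACE` stands in for /mnt/jenkins-workspace/workspace, and the backdated directory simulates a stale job (requires GNU `touch -d`):

```shell
# Rehearse the disk-space cleanup against a throwaway directory.
WORKSPACE=$(mktemp -d)                          # stand-in for /mnt/jenkins-workspace/workspace
mkdir "$WORKSPACE/stale-job" "$WORKSPACE/fresh-job"
touch -d '30 days ago' "$WORKSPACE/stale-job"   # backdate one dir past the cutoff

# Delete top-level workspace directories not modified in the last 15 days,
# exactly as in the log entries above:
find "$WORKSPACE" -mindepth 1 -maxdepth 1 -type d -mtime +15 -exec rm -rf {} \;

ls "$WORKSPACE"    # only fresh-job remains
```

`-mindepth 1 -maxdepth 1` restricts the match to the per-job directories themselves, so the workspace root is never deleted and nothing inside a surviving job is touched individually.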
23:04 <bd808> updated scap to a404a39: Build wikiversions.php in addition to wikiversions.cdb [releng]
22:51 <bd808> testing https://gerrit.wikimedia.org/r/#/c/230679 via cherry-pick to /srv/deployment/scap/scap [releng]